query_id (stringlengths 32-32) | query (stringlengths 0-35.7k) | positive_passages (listlengths 1-7) | negative_passages (listlengths 22-29) | subset (stringclasses 2 values)
---|---|---|---|---|
939b1c9c5b746e18175e27596c62d788 | A Pinch of Humor for Short-Text Conversation: An Information Retrieval Approach | [
{
"docid": "3ea104489fb5ac5b3e671659f8498530",
"text": "In this paper, we present our work of humor recognition on Twitter, which will facilitate affect and sentimental analysis in the social network. The central question of what makes a tweet (Twitter post) humorous drives us to design humor-related features, which are derived from influential humor theories, linguistic norms, and affective dimensions. Using machine learning techniques, we are able to recognize humorous tweets with high accuracy and F-measure. More importantly, we single out features that contribute to distinguishing non-humorous tweets from humorous tweets, and humorous tweets from other short humorous texts (non-tweets). This proves that humorous tweets possess discernible characteristics that are neither found in plain tweets nor in humorous non-tweets. We believe our novel findings will inform and inspire the burgeoning field of computational humor research in the social media.",
"title": ""
},
{
"docid": "7577dac903003b812c63ea20d09183c8",
"text": "Humor is one of the most interesting and puzzling aspects of human behavior. Despite the attention it has received in fields such as philosophy, linguistics, and psychology, there have been only few attempts to create computational models for humor recognition or generation. In this article, we bring empirical evidence that computational approaches can be successfully applied to the task of humor recognition. Through experiments performed on very large data sets, we show that automatic classification techniques can be effectively used to distinguish between humorous and non-humorous texts, with significant improvements observed over a priori known baselines.",
"title": ""
}
] | [
{
"docid": "1d9f683409c3d6f19b9b6738a1a76c4a",
"text": "The empirical fact that classifiers, trained on given data collections, perform poorly when tested on data acquired in different settings is theoretically explained in domain adaptation through a shift among distributions of the source and target domains. Alleviating the domain shift problem, especially in the challenging setting where no labeled data are available for the target domain, is paramount for having visual recognition systems working in the wild. As the problem stems from a shift among distributions, intuitively one should try to align them. In the literature, this has resulted in a stream of works attempting to align the feature representations learned from the source and target domains. Here we take a different route. Rather than introducing regularization terms aiming to promote the alignment of the two representations, we act at the distribution level through the introduction of DomaIn Alignment Layers (DIAL), able to match the observed source and target data distributions to a reference one. Thorough experiments on three different public benchmarks we confirm the power of our approach. ∗This work was partially supported by the ERC grant 637076 RoboExNovo (B.C.), and the CHIST-ERA project ALOOF (B.C, F. M. C.).",
"title": ""
},
{
"docid": "8f174607776cd7dc8c69739183121fcc",
"text": "We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. Among state-of-the-art stacking methods, stacking with probability distributions and multi-response linear regression performs best. We propose two extensions of this method, one using an extended set of meta-level features and the other using multi-response model trees to learn at the meta-level. We show that the latter extension performs better than existing stacking approaches and better than selecting the best classifier by cross validation.",
"title": ""
},
{
"docid": "51066d24144efe6456f8169f8e60a561",
"text": "Face biometric systems are vulnerable to spoofing attacks. Such attacks can be performed in many ways, including presenting a falsified image, video or 3D mask of a valid user. A widely used approach for differentiating genuine faces from fake ones has been to capture their inherent differences in (2D or 3D) texture using local descriptors. One limitation of these methods is that they may fail if an unseen attack type, e.g. a highly realistic 3D mask which resembles real skin texture, is used in spoofing. Here we propose a robust anti-spoofing method by detecting pulse from face videos. Based on the fact that a pulse signal exists in a real living face but not in any mask or print material, the method could be a generalized solution for face liveness detection. The proposed method is evaluated first on a 3D mask spoofing database 3DMAD to demonstrate its effectiveness in detecting 3D mask attacks. More importantly, our cross-database experiment with high quality REAL-F masks shows that the pulse based method is able to detect even the previously unseen mask type whereas texture based methods fail to generalize beyond the development data. Finally, we propose a robust cascade system combining two complementary attack-specific spoof detectors, i.e. utilize pulse detection against print attacks and color texture analysis against video attacks.",
"title": ""
},
{
"docid": "85edcb9c02a0153c94ae62852188a830",
"text": "Calcaneonavicular coalition is a congenital anomaly characterized by a connection between the calcaneus and the navicular. Surgery is required in case of chronic pain and after failure of conservative treatment. The authors present here the surgical technique and results of a 2-portals endoscopic resection of a calcaneonavicular synostosis. Both visualization and working portals must be identified with accuracy around the tarsal coalition with fluoroscopic control and according to the localization of the superficial peroneus nerve, to avoid neurologic damages during the resection. The endoscopic procedure provides a better visualization of the whole resection area and allows to achieve a complete resection and avoid plantar residual bone bar. The other important advantage of the endoscopic technique is the possibility to assess and treat in the same procedure-associated pathologies such as degenerative changes in the lateral side of the talar head with debridement and resection.",
"title": ""
},
{
"docid": "53d5bfb8654783bae8a09de651b63dd7",
"text": "-This paper introduces a new image thresholding method based on minimizing the measures of fuzziness of an input image. The membership function in the thresholding method is used to denote the characteristic relationship between a pixel and its belonging region (the object or the background). In addition, based on the measure of fuzziness, a fuzzy range is defined to find the adequate threshold value within this range. The principle of the method is easy to understand and it can be directly extended to multilevel thresholding. The effectiveness of the new method is illustrated by using the test images of having various types of histograms. The experimental results indicate that the proposed method has demonstrated good performance in bilevel and trilevel thresholding. Image thresholding Measure of fuzziness Fuzzy membership function I. I N T R O D U C T I O N Image thresholding which extracts the object from the background in an input image is one of the most common applications in image analysis. For example, in automatic recognition of machine printed or handwritten texts, in shape recognition of objects, and in image enhancement, thresholding is a necessary step for image preprocessing. Among the image thresholding methods, bilevel thresholding separates the pixels of an image into two regions (i.e. the object and the background); one region contains pixels with gray values smaller than the threshold value and the other contains pixels with gray values larger than the threshold value. Further, if the pixels of an image are divided into more than two regions, this is called multilevel thresholding. In general, the threshold is located at the obvious and deep valley of the histogram. However, when the valley is not so obvious, it is very difficult to determine the threshold. During the past decade, many research studies have been devoted to the problem of selecting the appropriate threshold value. The survey of these papers can be seen in the literature31-3) Fuzzy set theory has been applied to image thresholding to partition the image space into meaningful regions by minimizing the measure of fuzziness of the image. The measurement can be expressed by terms such as entropy, {4) index of fuzziness, ~5) and index of nonfuzziness36) The \"ent ropy\" involves using Shannon's function to measure the fuzziness of an image so that the threshold can be determined by minimizing the entropy measure. It is very different from the classical entropy measure which measures t Author to whom correspondence should be addressed. probabil ist ic information. The index of fuzziness represents the average amount of fuzziness in an image by measuring the distance between the gray-level image and its near crisp (binary) version. The index of nonfuzziness indicates the average amount of nonfuzziness (crispness) in an image by taking the absolute difference between the crisp version and its complement. In addition, Pal and Rosenfeld ~7) developed an algorithm based on minimizing the compactness of fuzziness to obtain the fuzzy and nonfuzzy versions of an ill-defined image such that the appropriate nonfuzzy threshold can be chosen. They used some fuzzy geometric properties, i.e. the area and the perimeter of an fuzzy image, to obtain the measure of compactness. The effectiveness of the method has been illustrated by using two input images of bimodal and unimodal histograms. 
Another measurement, which is called the index of area converge (IOAC), ts) has been applied to select the threshold by finding the local minima of the IOAC. Since both the measures of compactness and the IOAC involve the spatial information of an image, they need a long time to compute the perimeter of the fuzzy plane. In this paper, based on the concept of fuzzy set, an effective thresholding method is proposed. Given a certain threshold value, the membership function of a pixel is defined by the absolute difference between the gray level and the average gray level of its belonging region (i.e. the object or the background). The larger the absolute difference is, the smaller the membership value becomes. It is expected that the membership value of each pixel in the input image is as large as possible. In addition, two measures of fuzziness are proposed to indicate the fuzziness of an image. The optimal threshold can then be effectively determined by minimizing the measure of fuzziness of an image. The performance of the proposed approach is compared",
"title": ""
},
{
"docid": "8fd43b39e748d47c02b66ee0d8eecc65",
"text": "One standing problem in the area of web-based e-learning is how to support instructional designers to effectively and efficiently retrieve learning materials, appropriate for their educational purposes. Learning materials can be retrieved from structured repositories, such as repositories of Learning Objects and Massive Open Online Courses; they could also come from unstructured sources, such as web hypertext pages. Platforms for distance education often implement algorithms for recommending specific educational resources and personalized learning paths to students. But choosing and sequencing the adequate learning materials to build adaptive courses may reveal to be quite a challenging task. In particular, establishing the prerequisite relationships among learning objects, in terms of prior requirements needed to understand and complete before making use of the subsequent contents, is a crucial step for faculty, instructional designers or automated systems whose goal is to adapt existing learning objects to delivery in new distance courses. Nevertheless, this information is often missing. In this paper, an innovative machine learning-based approach for the identification of prerequisites between text-based resources is proposed. A feature selection methodology allows us to consider the attributes that are most relevant to the predictive modeling problem. These features are extracted from both the input material and weak-taxonomies available on the web. Input data undergoes a Natural language process that makes finding patterns of interest more easy for the applied automated analysis. Finally, the prerequisite identification is cast to a binary statistical classification task. The accuracy of the approach is validated by means of experimental evaluations on real online coursers covering different subjects.",
"title": ""
},
{
"docid": "ec1120018899c6c9fe16240b8e35efac",
"text": "Redundant collagen deposition at sites of healing dermal wounds results in hypertrophic scars. Adipose-derived stem cells (ADSCs) exhibit promise in a variety of anti-fibrosis applications by attenuating collagen deposition. The objective of this study was to explore the influence of an intralesional injection of ADSCs on hypertrophic scar formation by using an established rabbit ear model. Twelve New Zealand albino rabbits were equally divided into three groups, and six identical punch defects were made on each ear. On postoperative day 14 when all wounds were completely re-epithelialized, the first group received an intralesional injection of ADSCs on their right ears and Dulbecco’s modified Eagle’s medium (DMEM) on their left ears as an internal control. Rabbits in the second group were injected with conditioned medium of the ADSCs (ADSCs-CM) on their right ears and DMEM on their left ears as an internal control. Right ears of the third group remained untreated, and left ears received DMEM. We quantified scar hypertrophy by measuring the scar elevation index (SEI) on postoperative days 14, 21, 28, and 35 with ultrasonography. Wounds were harvested 35 days later for histomorphometric and gene expression analysis. Intralesional injections of ADSCs or ADSCs-CM both led to scars with a far more normal appearance and significantly decreased SEI (44.04 % and 32.48 %, respectively, both P <0.01) in the rabbit ears compared with their internal controls. Furthermore, we confirmed that collagen was organized more regularly and that there was a decreased expression of alpha-smooth muscle actin (α-SMA) and collagen type Ι in the ADSC- and ADSCs-CM-injected scars according to histomorphometric and real-time quantitative polymerase chain reaction analysis. There was no difference between DMEM-injected and untreated scars. An intralesional injection of ADSCs reduces the formation of rabbit ear hypertrophic scars by decreasing the α-SMA and collagen type Ι gene expression and ameliorating collagen deposition and this may result in an effective and innovative anti-scarring therapy.",
"title": ""
},
{
"docid": "0250d6bb0bcf11ca8af6c2661c1f7f57",
"text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.",
"title": ""
},
{
"docid": "3f90af944ed7603fa7bbe8780239116a",
"text": "Display advertising has been a significant source of revenue for publishers and ad networks in online advertising ecosystem. One important business model in online display advertising is Ad Exchange marketplace, also called non-guaranteed delivery (NGD), in which advertisers buy targeted page views and audiences on a spot market through real-time auction. In this paper, we describe a bid landscape forecasting system in NGD marketplace for any advertiser campaign specified by a variety of targeting attributes. In the system, the impressions that satisfy the campaign targeting attributes are partitioned into multiple mutually exclusive samples. Each sample is one unique combination of quantified attribute values. We develop a divide-and-conquer approach that breaks down the campaign-level forecasting problem. First, utilizing a novel star-tree data structure, we forecast the bid for each sample using non-linear regression by gradient boosting decision trees. Then we employ a mixture-of-log-normal model to generate campaign-level bid distribution based on the sample-level forecasted distributions. The experiment results of a system developed with our approach show that it can accurately forecast the bid distributions for various campaigns running on the world's largest NGD advertising exchange system, outperforming two baseline methods in term of forecasting errors.",
"title": ""
},
{
"docid": "1ee33813e4d8710a620c4bd47817f774",
"text": "This research work concerns the perceptual evaluation of the performance of information systems (IS) and more particularly, the construct of user satisfaction. Faced with the difficulty of obtaining objective measures for the success of IS, user satisfaction appeared as a substitutive measure of IS performance (DeLone & McLean, 1992). Some researchers have indeed shown that the evaluation of an IS could not happen without an analysis of the feelings and perceptions of individuals who make use of it. Consequently, the concept of satisfaction has been considered as a guarantee of the performance of an IS. Also it has become necessary to ponder the drivers of user satisfaction. The analysis of models and measurement tools for satisfaction as well as the adoption of a contingency perspective has allowed the description of principal dimensions that have a direct or less direct impact on user perceptions\n The case study of a large French group, carried out through an interpretativist approach conducted by way of 41 semi-structured interviews, allowed the conceptualization of the problematique of perceptual evaluation of IS in a particular field study. This study led us to confirm the impact of certain factors (such as perceived usefulness, participation, the quality of relations with the IS Function and its resources and also the fit of IS with user needs). On the contrary, other dimensions regarded as fundamental do not receive any consideration or see their influence nuanced in the case studied (the properties of IS, the ease of use, the quality of information). Lastly, this study has allowed for the identification of the influence of certain contingency and contextual variables on user satisfaction and, above all, for the description of the importance of interactions between the IS Function and the users",
"title": ""
},
{
"docid": "732aa9623301d4d3cc6fc9d15c6836fe",
"text": "Growing network traffic brings huge pressure to the server cluster. Using load balancing technology in server cluster becomes the choice of most enterprises. Because of many limitations, the development of the traditional load balancing technology has encountered bottlenecks. This has forced companies to find new load balancing method. Software Defined Network (SDN) provides a good method to solve the load balancing problem. In this paper, we implemented two load balancing algorithm that based on the latest SDN network architecture. The first one is a static scheduling algorithm and the second is a dynamic scheduling algorithm. Our experiments show that the performance of the dynamic algorithm is better than the static algorithm.",
"title": ""
},
{
"docid": "e5ab552986fc1ef93ea898ffc85ce0f9",
"text": "As Cloud computing is reforming the infrastructure of IT industries, it has become one of the critical security concerns of the defensive mechanisms applied to secure Cloud environment. Even if there are tremendous advancements in defense systems regarding the confidentiality, authentication and access control, there is still a challenge to provide security against availability of associated resources. Denial-of-service (DoS) attack and distributed denial-of-service (DDoS) attack can primarily compromise availability of the system services and can be easily started by using various tools, leading to financial damage or affecting the reputation. These attacks are very difficult to detect and filter, since packets that cause the attack are very much similar to legitimate traffic. DoS attack is considered as the biggest threat to IT industry, and intensity, size and frequency of the attack are observed to be increasing every year. Therefore, there is a need for stronger and universal method to impede these attacks. In this paper, we present an overview of DoS attack and distributed DoS attack that can be carried out in Cloud environment and possible defensive mechanisms, tools and devices. In addition, we discuss many open issues and challenges in defending Cloud environment against DoS attack. This provides better understanding of the DDoS attack problem in Cloud computing environment, current solution space, and future research scope to deal with such attacks efficiently.",
"title": ""
},
{
"docid": "8b86b1a60595bc9557d796a3bf22772f",
"text": "Orchid plants are the members of Orchidaceae consisting of more than 25,000 species, which are distributed almost all over the world but more abundantly in the tropics. There are 177 genera, 1,125 species of orchids that originated in Thailand. Orchid plant collected from different nurseries showing Chlorotic and mosaic symptoms were observed on Vanda plants and it was suspected to infect with virus. So the symptomatic plants were tested for Cymbidium Mosaic Virus (CYMV), Odontoglossum ring spot virus (ORSV), Poty virus and Tomato Spotted Wilt Virus (TSWV) with Direct Antigen CoatingEnzyme Linked Immunosorbent Assay (DAC-ELISA) and further confirmed by Transmission Electron Microscopy (TEM). With the two methods CYMV and ORSV were detected positively from the suspected imported samples and low positive results were observed for Potex, Poty virus and Tomato Spotted Wilt Virus (TSWV).",
"title": ""
},
{
"docid": "acb569b267eae92a6e33b52725f28833",
"text": "A multi-objective design procedure is applied to the design of a close-coupled inductor for a three-phase interleaved 140kW DC-DC converter. For the multi-objective optimization, a genetic algorithm is used in combination with a detailed physical model of the inductive component. From the solution of the optimization, important conclusions about the advantages and disadvantages of using close-coupled inductors compared to separate inductors can be drawn.",
"title": ""
},
{
"docid": "f4df305ad32ebdd1006eefdec6ee7ca3",
"text": "In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.",
"title": ""
},
{
"docid": "b2d3ce62b38ac8d7bd0a7b7a2ff7d663",
"text": "It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages, from use of lower-cost equipment, shared use of infrastructure with fixed access networks, to obtaining statistical multiplexing and optimized performance through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high-bit-rate requirements from the transport of increased bandwidth radio streams for multiple antennas in future mobile networks, and low latency and jitter to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transport of baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains. Delay and synchronization issues remain to be solved.",
"title": ""
},
{
"docid": "217742ed285e8de40d68188566475126",
"text": "It has been proposed that D-amino acid oxidase (DAO) plays an essential role in degrading D-serine, an endogenous coagonist of N-methyl-D-aspartate (NMDA) glutamate receptors. DAO shows genetic association with amyotrophic lateral sclerosis (ALS) and schizophrenia, in whose pathophysiology aberrant metabolism of D-serine is implicated. Although the pathology of both essentially involves the forebrain, in rodents, enzymatic activity of DAO is hindbrain-shifted and absent in the region. Here, we show activity-based distribution of DAO in the central nervous system (CNS) of humans compared with that of mice. DAO activity in humans was generally higher than that in mice. In the human forebrain, DAO activity was distributed in the subcortical white matter and the posterior limb of internal capsule, while it was almost undetectable in those areas in mice. In the lower brain centers, DAO activity was detected in the gray and white matters in a coordinated fashion in both humans and mice. In humans, DAO activity was prominent along the corticospinal tract, rubrospinal tract, nigrostriatal system, ponto-/olivo-cerebellar fibers, and in the anterolateral system. In contrast, in mice, the reticulospinal tract and ponto-/olivo-cerebellar fibers were the major pathways showing strong DAO activity. In the human corticospinal tract, activity-based staining of DAO did not merge with a motoneuronal marker, but colocalized mostly with excitatory amino acid transporter 2 and in part with GFAP, suggesting that DAO activity-positive cells are astrocytes seen mainly in the motor pathway. These findings establish the distribution of DAO activity in cerebral white matter and the motor system in humans, providing evidence to support the involvement of DAO in schizophrenia and ALS. Our results raise further questions about the regulation of D-serine in DAO-rich regions as well as the physiological/pathological roles of DAO in white matter astrocytes.",
"title": ""
},
{
"docid": "834bc1349d6da53c277ddd7eba95dc6a",
"text": "Lymphedema is a common condition frequently seen in cancer patients who have had lymph node dissection +/- radiation treatment. Traditional management is mainly non-surgical and unsatisfactory. Surgical treatment has relied on excisional techniques in the past. Physiologic operations have more recently been devised to help improve this condition. Assessing patients and deciding which of the available operations to offer them can be challenging. MRI is an extremely useful tool in patient assessment and treatment planning. J. Surg. Oncol. 2017;115:18-22. © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "63e45222ea9627ce22e9e90fc1ca4ea1",
"text": "A soft switching three-transistor push-pull(TTPP)converter is proposed in this paper. The 3rd transistor is inserted in the primary side of a traditional push-pull converter. Two primitive transistors can achieve zero-voltage-switching (ZVS) easily under a wide load range, the 3rd transistor can also realize zero-voltage-switching assisted by leakage inductance. The rated voltage of the 3rd transistor is half of that of the main transistors. The operation theory is explained in detail. The soft-switching realization conditions are derived. An 800 W with 83.3 kHz switching frequency prototype has been built. The experimental result is provided to verify the analysis.",
"title": ""
}
] | scidocsrr |
b562b88ab5da620a35fdf35bef750fc5 | Practicing Safe Computing: A Multimedia Empirical Examination of Home Computer User Security Behavioral Intentions | [
{
"docid": "cd811b8c1324ca0fef6a25e1ca5c4ce9",
"text": "This commentary discusses why most IS academic research today lacks relevance to practice and suggests tactics, procedures, and guidelines that the IS academic community might follow in their research efforts and articles to introduce relevance to practitioners. The commentary begins by defining what is meant by relevancy in the context of academic research. It then explains why there is a lack of attention to relevance within the IS scholarly literature. Next, actions that can be taken to make relevance a more central aspect of IS research and to communicate implications of IS research more effectively to IS professionals are suggested.",
"title": ""
}
] | [
{
"docid": "7635ad3e2ac2f8e72811bf056d29dfbb",
"text": "Nowadays, many consumer videos are captured by portable devices such as iPhone. Different from constrained videos that are produced by professionals, e.g., those for broadcast, summarizing multiple handheld videos from a same scenery is a challenging task. This is because: 1) these videos have dramatic semantic and style variances, making it difficult to extract the representative key frames; 2) the handheld videos are with different degrees of shakiness, but existing summarization techniques cannot alleviate this problem adaptively; and 3) it is difficult to develop a quality model that evaluates a video summary, due to the subjectiveness of video quality assessment. To solve these problems, we propose perceptual multiattribute optimization which jointly refines multiple perceptual attributes (i.e., video aesthetics, coherence, and stability) in a multivideo summarization process. In particular, a weakly supervised learning framework is designed to discover the semantically important regions in each frame. Then, a few key frames are selected based on their contributions to cover the multivideo semantics. Thereafter, a probabilistic model is proposed to dynamically fit the key frames into an aesthetically pleasing video summary, wherein its frames are stabilized adaptively. Experiments on consumer videos taken from sceneries throughout the world demonstrate the descriptiveness, aesthetics, coherence, and stability of the generated summary.",
"title": ""
},
{
"docid": "74fa56730057ae21f438df46054041c4",
"text": "Facial fractures can lead to long-term sequelae if not repaired. Complications from surgical approaches can be equally detrimental to the patient. Periorbital approaches via the lower lid can lead to ectropion, entropion, scleral show, canthal malposition, and lid edema.1–6 Ectropion can cause epiphora, whereas entropion often causes pain and irritation due to contact between the cilia and cornea. Transcutaneous and tranconjunctival approaches are commonly used to address fractures of the infraorbital rim and orbital floor. The transconjunctival approach is popular among otolaryngologists and ophthalmologists, whereas transcutaneous approaches are more commonly used by oral maxillofacial surgeons and plastic surgeons.7Ridgwayet al reported in theirmeta-analysis that lid complications are highest with the subciliary approach (19.1%) and lowest with transconjunctival approach (2.1%).5 Raschke et al also found a lower incidence of lower lid malpositionvia the transconjunctival approach comparedwith the subciliary approach.8 Regardless of approach, complications occur and thefacial traumasurgeonmustknowhowtomanage these issues. In this article, we will review the common complications of lower lid surgery and their treatment.",
"title": ""
},
{
"docid": "c6d3f20e9d535faab83fb34cec0fdb5b",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "0bb73266d8e4c18503ccda4903856e44",
"text": "Recent progress in advanced driver assistance systems and the race towards autonomous vehicles is mainly driven by two factors: (1) increasingly sophisticated algorithms that interpret the environment around the vehicle and react accordingly, and (2) the continuous improvements of sensor technology itself. In terms of cameras, these improvements typically include higher spatial resolution, which as a consequence requires more data to be processed. The trend to add multiple cameras to cover the entire surrounding of the vehicle is not conducive in that matter. At the same time, an increasing number of special purpose algorithms need access to the sensor input data to correctly interpret the various complex situations that can occur, particularly in urban traffic. By observing those trends, it becomes clear that a key challenge for vision architectures in intelligent vehicles is to share computational resources. We believe this challenge should be faced by introducing a representation of the sensory data that provides compressed and structured access to all relevant visual content of the scene. The Stixel World discussed in this paper is such a representation. It is a medium-level model of the environment that is specifically designed to compress information about obstacles by leveraging the typical layout of outdoor traffic scenes. It has proven useful for a multi∗Corresponding author: [email protected] Authors contributed equally and are listed in alphabetical order Preprint submitted to Image and Vision Computing February 14, 2017 tude of automotive vision applications, including object detection, tracking, segmentation, and mapping. In this paper, we summarize the ideas behind the model and generalize it to take into account multiple dense input streams: the image itself, stereo depth maps, and semantic class probability maps that can be generated, e.g ., by deep convolutional neural networks. Our generalization is embedded into a novel mathematical formulation for the Stixel model. We further sketch how the free parameters of the model can be learned using structured SVMs.",
"title": ""
},
{
"docid": "116d0735ded06ba1dc9814f21236b7b1",
"text": "In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.",
"title": ""
},
{
"docid": "9e7c12fbc790314f6897f0b16d43d0af",
"text": "We study in this paper the rate of convergence for learning distributions with the Generative Adversarial Networks (GAN) framework, which subsumes Wasserstein, Sobolev and MMD GANs as special cases. We study a wide range of parametric and nonparametric target distributions, under a collection of objective evaluation metrics. On the nonparametric end, we investigate the minimax optimal rates and fundamental difficulty of the density estimation under the adversarial framework. On the parametric end, we establish theory for neural network classes, that characterizes the interplay between the choice of generator and discriminator. We investigate how to improve the GAN framework with better theoretical guarantee through the lens of regularization. We discover and isolate a new notion of regularization, called the generator/discriminator pair regularization, that sheds light on the advantage of GAN compared to classic parametric and nonparametric approaches for density estimation.",
"title": ""
},
{
"docid": "42b810b7ecd48590661cc5a538bec427",
"text": "Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance. We will make our code base fully available.",
"title": ""
},
{
"docid": "103e6ecab7ccd8e11f010fb865091bd2",
"text": "The mitogen-activated protein kinase (MAPK) network is a conserved signalling module that regulates cell fate by transducing a myriad of growth-factor signals. The ability of this network to coordinate and process a variety of inputs from different growth-factor receptors into specific biological responses is, however, still not understood. We investigated how the MAPK network brings about signal specificity in PC-12 cells, a model for neuronal differentiation. Reverse engineering by modular-response analysis uncovered topological differences in the MAPK core network dependent on whether cells were activated with epidermal or neuronal growth factor (EGF or NGF). On EGF stimulation, the network exhibited negative feedback only, whereas a positive feedback was apparent on NGF stimulation. The latter allows for bi-stable Erk activation dynamics, which were indeed observed. By rewiring these regulatory feedbacks, we were able to reverse the specific cell responses to EGF and NGF. These results show that growth factor context determines the topology of the MAPK signalling network and that the resulting dynamics govern cell fate.",
"title": ""
},
{
"docid": "10d5049c354015ad93a1bff5ef346e67",
"text": "We show that a neural approach to the task of non-factoid answer reranking can benefit from the inclusion of tried-and-tested handcrafted features. We present a novel neural network architecture based on a combination of recurrent neural networks that are used to encode questions and answers, and a multilayer perceptron. We show how this approach can be combined with additional features, in particular, the discourse features presented by Jansen et al. (2014). Our neural approach achieves state-of-the-art performance on a public dataset from Yahoo! Answers and its performance is further improved by incorporating the discourse features. Additionally, we present a new dataset of Ask Ubuntu questions where the hybrid approach also achieves good results.",
"title": ""
},
{
"docid": "a0ce75c68e981d6d9a442d73f97781ad",
"text": "Cancer stem cells (CSCs), or alternatively called tumor initiating cells (TICs), are a subpopulation of tumor cells, which possesses the ability to self-renew and differentiate into bulk tumor mass. An accumulating body of evidence suggests that CSCs contribute to the growth and recurrence of tumors and the resistance to chemo- and radiotherapy. CSCs achieve self-renewal through asymmetric division, in which one daughter cell retains the self-renewal ability, and the other is destined to differentiation. Recent studies revealed the mechanisms of asymmetric division in normal stem cells (NSCs) and, to a limited degree, CSCs as well. Asymmetric division initiates when a set of polarity-determining proteins mark the apical side of mother stem cells, which arranges the unequal alignment of mitotic spindle and centrosomes along the apical-basal polarity axis. This subsequently guides the recruitment of fate-determining proteins to the basal side of mother cells. Following cytokinesis, two daughter cells unequally inherit centrosomes, differentiation-promoting fate determinants, and other proteins involved in the maintenance of stemness. Modulation of asymmetric and symmetric division of CSCs may provide new strategies for dual targeting of CSCs and the bulk tumor mass. In this review, we discuss the current understanding of the mechanisms by which NSCs and CSCs achieve asymmetric division, including the functions of polarity- and fate-determining factors.",
"title": ""
},
{
"docid": "b4103e5ddc58672334b66cc504dab5a6",
"text": "An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.",
"title": ""
},
{
"docid": "d4d802b296b210a1957b1a214d9fd9fb",
"text": "Many task domains require robots to interpret and act upon natural language commands which are given by people and which refer to the robot’s physical surroundings. Such interpretation is known variously as the symbol grounding problem (Harnad, 1990), grounded semantics (Feldman et al., 1996) and grounded language acquisition (Nenov and Dyer, 1993, 1994). This problem is challenging because people employ diverse vocabulary and grammar, and because robots have substantial uncertainty about the nature and contents of their surroundings, making it difficult to associate the constitutive language elements (principally noun phrases and spatial relations) of the command text to elements of those surroundings. Symbolic models capture linguistic structure but have not scaled successfully to handle the diverse language produced by untrained users. Existing statistical approaches can better handle diversity, but have not to date modeled complex linguistic structure, limiting achievable accuracy. Recent hybrid approaches have addressed limitations in scaling and complexity, but have not effectively associated linguistic and perceptual features. Our framework, called Generalized Grounding Graphs (G), addresses these issues by defining a probabilistic graphical model dynamically according to the linguistic parse structure of a natural language command. This approach scales effectively, handles linguistic diversity, and enables the system to associate parts of a command with the specific objects, places, and events in the external world to which they refer. We show that robots can learn word meanings and use those learned meanings to robustly follow natural language commands produced by untrained users. We demonstrate our approach for both mobility commands (e.g. route directions like “Go down the hallway through the door”) and mobile manipulation commands (e.g. physical directives like “Pick up the pallet on the truck”) involving a variety of semi-autonomous robotic platforms, including a wheelchair, a microair vehicle, a forklift, and the Willow Garage PR2. The first two authors contributed equally to this paper. 1 ar X iv :1 71 2. 01 09 7v 1 [ cs .C L ] 2 9 N ov 2 01 7",
"title": ""
},
{
"docid": "7a72f69ad4926798e12f6fa8e598d206",
"text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"title": ""
},
{
"docid": "f3f2184b1fd6a62540f8547df3014b44",
"text": "Social Media Analytics is an emerging interdisciplinary research field that aims on combining, extending, and adapting methods for analysis of social media data. On the one hand it can support IS and other research disciplines to answer their research questions and on the other hand it helps to provide architectural designs as well as solution frameworks for new social media-based applications and information systems. The authors suggest that IS should contribute to this field and help to develop and process an interdisciplinary research agenda.",
"title": ""
},
{
"docid": "ea04dad2ac1de160f78fa79b33a93b6a",
"text": "OBJECTIVE\nTo construct new size charts for all fetal limb bones.\n\n\nDESIGN\nA prospective, cross sectional study.\n\n\nSETTING\nUltrasound department of a large hospital.\n\n\nSAMPLE\n663 fetuses scanned once only for the purpose of the study at gestations between 12 and 42 weeks.\n\n\nMETHODS\nCentiles were estimated by combining separate regression models fitted to the mean and standard deviation, assuming that the measurements have a normal distribution at each gestational age.\n\n\nMAIN OUTCOME MEASURES\nDetermination of fetal limb lengths from 12 to 42 weeks of gestation.\n\n\nRESULTS\nSize charts for fetal bones (radius, ulna, humerus, tibia, fibula, femur and foot) are presented and compared with previously published data.\n\n\nCONCLUSIONS\nWe present new size charts for fetal limb bones which take into consideration the increasing variability with gestational age. We have compared these charts with other published data; the differences seen may be largely due to methodological differences. As standards for fetal head and abdominal measurements have been published from the same population, we suggest that the use of the new charts may facilitate prenatal diagnosis of skeletal dysplasias.",
"title": ""
},
{
"docid": "cc4c0a749c6a3f4ac92b9709f24f03f4",
"text": "Modern GPUs with their several hundred cores and more accessible programming models are becoming attractive devices for compute-intensive applications. They are particularly well suited for applications, such as image processing, where the end result is intended to be displayed via the graphics card. One of the more versatile and powerful graphics techniques is ray tracing. However, tracing each ray of light in a scene is very computational expensive and have traditionally been preprocessed on CPUs over hours, if not days. In this paper, Nvidia’s new OptiX ray tracing engine is used to show how the power of modern graphics cards, such as the Nvidia Quadro FX 5800, can be harnessed to ray trace several scenes that represent real-life applications in real-time speeds ranging from 20.63 to 67.15 fps. Near-perfect speedup is demonstrated on dual GPUs for scenes with complex geometries. The impact on ray tracing of the recently announced Nvidia Fermi processor, is also discussed.",
"title": ""
},
{
"docid": "edb9d5cbbc7b976a009e583f9947134b",
"text": "An important part of image enhancement is color constancy, which aims to make image colors invariant to illumination. In this paper the Color Dog (CD), a new learning-based global color constancy method is proposed. Instead of providing one, it corrects the other methods’ illumination estimations by reducing their scattering in the chromaticity space by using a its previously learning partition. The proposed method outperforms all other methods on most high-quality benchmark datasets. The results are presented and discussed.",
"title": ""
},
{
"docid": "6753c9ed08f6941e1d7dd5fc283cafac",
"text": "This letter presents a wideband transformer balun with a center open stub. Since the interconnected line between two coupled-lines greatly deteriorates the performance of balun in millimeter-wave designs, the proposed center open stub provides a good solution to further optimize the balance of balun. The proposed transformer balun with center open stub has been fabricated in 90 nm CMOS technology, with a compact chip area of 0.012 mm2. The balun achieves an amplitude imbalance of less than 1 dB for a frequency band ranging from 1 to 48 GHz along with a phase imbalance of less than 5 degrees for the frequency band ranging from 2 to 47 GHz.",
"title": ""
},
{
"docid": "6b49ccb6cb443c89fd32f407cb575653",
"text": "Recently, there has been a growing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments. In this paper, we explore the use of attention-based encoder-decoder model for Mandarin speech recognition on a voice search task. Previous attempts have shown that applying attention-based encoder-decoder to Mandarin speech recognition was quite difficult due to the logographic orthography of Mandarin, the large vocabulary and the conditional dependency of the attention model. In this paper, we use character embedding to deal with the large vocabulary. Several tricks are used for effective model training, including L2 regularization, Gaussian weight noise and frame skipping. We compare two attention mechanisms and use attention smoothing to cover long context in the attention model. Taken together, these tricks allow us to finally achieve a character error rate (CER) of 3.58% and a sentence error rate (SER) of 7.43% on the MiTV voice search dataset. While together with a trigram language model, CER and SER reach 2.81% and 5.77%, respectively.",
"title": ""
},
{
"docid": "95db5921ba31588e962ffcd8eb6469b0",
"text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. In the approach called Description Comes First (DCF) cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes search engine’s data structures directly to scale to large document collections. Introduction Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in calculation of similarity between documents (angle between document vectors) and thus facilitates application of various known (or modified) numerical clustering algorithms. While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. 
The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by ability to answer questions as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm authors employed frequently recurring phrases as both document similarity feature and final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium sized documents. In this",
"title": ""
}
] | scidocsrr |
48e2653f4a59a0c4f889d6b75a1c41ff | Click chain model in web search | [
{
"docid": "71be2ab6be0ab5c017c09887126053e5",
"text": "One of the most important yet insufficiently studied issues in online advertising is the externality effect among ads: the value of an ad impression on a page is affected not just by the location that the ad is placed in, but also by the set of other ads displayed on the page. For instance, a high quality competing ad can detract users from another ad, while a low quality ad could cause the viewer to abandon the page",
"title": ""
}
] | [
{
"docid": "7e4a485d489f9e9ce94889b52214c804",
"text": "A situated ontology is a world model used as a computational resource for solving a particular set of problems. It is treated as neither a \\natural\" entity waiting to be discovered nor a purely theoretical construct. This paper describes how a semantico-pragmatic analyzer, Mikrokosmos, uses knowledge from a situated ontology as well as from language-speciic knowledge sources (lexicons and microtheory rules). Also presented are some guidelines for acquiring ontological concepts and an overview of the technology developed in the Mikrokosmos project for large-scale acquisition and maintenance of ontological databases. Tools for acquiring, maintaining, and browsing ontologies can be shared more readily than ontologies themselves. Ontological knowledge bases can be shared as computational resources if such tools provide translators between diierent representation formats. 1 A Situated Ontology World models (ontologies) in computational applications are artiicially constructed entities. They are created, not discovered. This is why so many diierent world models were suggested. Many ontologies are developed for purely theoretical purposes or without the context of a practical situation (e. Many practical knowledge-based systems, on the other hand, employ world or domain models without recognizing them as a separate knowledge source (e.g., Farwell, et al. 1993). In the eld of natural language processing (NLP) there is now a consensus that all NLP systems that seek to represent and manipulate meanings of texts need an ontology (e. In our continued eeorts to build a multilingual knowledge-based machine translation (KBMT) system using an interlingual meaning representation (e.g., Onyshkevych and Nirenburg, 1994), we have developed an ontology to facilitate natural language interpretation and generation. The central goal of the Mikrokosmos project is to develop a system that produces a comprehensive Text Meaning Representation (TMR) for an input text in any of a set of source languages. 1 Knowledge that supports this process is stored both in language-speciic knowledge sources and in an independently motivated, language-neutral ontology (e. An ontology for NLP purposes is a body of knowledge about the world (or a domain) that a) is a repository of primitive symbols used in meaning representation; b) organizes these symbols in a tangled subsumption hierarchy; and c) further interconnects these symbols using a rich system of semantic and discourse-pragmatic relations deened among the concepts. In order for such an ontology to become a computational resource for solving problems such as ambiguity and reference resolution, it must be actually constructed, not merely deened formally, as is the …",
"title": ""
},
{
"docid": "7973587470f4e40f04288fb261445cac",
"text": "In developed countries, vitamin B12 (cobalamin) deficiency usually occurs in children, exclusively breastfed ones whose mothers are vegetarian, causing low body stores of vitamin B12. The haematologic manifestation of vitamin B12 deficiency is pernicious anaemia. It is a megaloblastic anaemia with high mean corpuscular volume and typical morphological features, such as hyperlobulation of the nuclei of the granulocytes. In advanced cases, neutropaenia and thrombocytopaenia can occur, simulating aplastic anaemia or leukaemia. In addition to haematological symptoms, infants may experience weakness, fatigue, failure to thrive, and irritability. Other common findings include pallor, glossitis, vomiting, diarrhoea, and icterus. Neurological symptoms may affect the central nervous system and, in severe cases, rarely cause brain atrophy. Here, we report an interesting case, a 12-month old infant, who was admitted with neurological symptoms and diagnosed with vitamin B12 deficiency.",
"title": ""
},
{
"docid": "effe9cf542849a0da41f984f7097228a",
"text": "We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.",
"title": ""
},
{
"docid": "edba38e0515256fbb2e72fce87747472",
"text": "The risk of predation can have large effects on ecological communities via changes in prey behaviour, morphology and reproduction. Although prey can use a variety of sensory signals to detect predation risk, relatively little is known regarding the effects of predator acoustic cues on prey foraging behaviour. Here we show that an ecologically important marine crab species can detect sound across a range of frequencies, probably in response to particle acceleration. Further, crabs suppress their resource consumption in the presence of experimental acoustic stimuli from multiple predatory fish species, and the sign and strength of this response is similar to that elicited by water-borne chemical cues. When acoustic and chemical cues were combined, consumption differed from expectations based on independent cue effects, suggesting redundancies among cue types. These results highlight that predator acoustic cues may influence prey behaviour across a range of vertebrate and invertebrate taxa, with the potential for cascading effects on resource abundance.",
"title": ""
},
{
"docid": "52462bd444f44910c18b419475a6c235",
"text": "Snoring is a common symptom of serious chronic disease known as obstructive sleep apnea (OSA). Knowledge about the location of obstruction site (VVelum, OOropharyngeal lateral walls, T-Tongue, E-Epiglottis) in the upper airways is necessary for proper surgical treatment. In this paper we propose a dual source-filter model similar to the source-filter model of speech to approximate the generation process of snore audio. The first filter models the vocal tract from lungs to the point of obstruction with white noise excitation from the lungs. The second filter models the vocal tract from the obstruction point to the lips/nose with impulse train excitation which represents vibrations at the point of obstruction. The filter coefficients are estimated using the closed and open phases of the snore beat cycle. VOTE classification is done by using SVM classifier and filter coefficients as features. The classification experiments are performed on the development set (283 snore audios) of the MUNICH-PASSAU SNORE SOUND CORPUS (MPSSC). We obtain an unweighted average recall (UAR) of 49.58%, which is higher than the INTERSPEECH-2017 snoring sub-challenge baseline technique by ∼3% (absolute).",
"title": ""
},
{
"docid": "c250963a2b536a9ce9149f385f4d2a0f",
"text": "The systematic review (SR) is a methodology used to find and aggregate all relevant existing evidence about a specific research question of interest. One of the activities associated with the SR process is the selection of primary studies, which is a time consuming manual task. The quality of primary study selection impacts the overall quality of SR. The goal of this paper is to propose a strategy named “Score Citation Automatic Selection” (SCAS), to automate part of the primary study selection activity. The SCAS strategy combines two different features, content and citation relationships between the studies, to make the selection activity as automated as possible. Aiming to evaluate the feasibility of our strategy, we conducted an exploratory case study to compare the accuracy of selecting primary studies manually and using the SCAS strategy. The case study shows that for three SRs published in the literature and previously conducted in a manual implementation, the average effort reduction was 58.2 % when applying the SCAS strategy to automate part of the initial selection of primary studies, and the percentage error was 12.98 %. Our case study provided confidence in our strategy, and suggested that it can reduce the effort required to select the primary studies without adversely affecting the overall results of SR.",
"title": ""
},
{
"docid": "db4784e051b798dfa6c3efa5e84c4d00",
"text": "Purpose – The purpose of this paper is to propose and verify that the technology acceptance model (TAM) can be employed to explain and predict the acceptance of mobile learning (M-learning); an activity in which users access learning material with their mobile devices. The study identifies two factors that account for individual differences, i.e. perceived enjoyment (PE) and perceived mobility value (PMV), to enhance the explanatory power of the model. Design/methodology/approach – An online survey was conducted to collect data. A total of 313 undergraduate and graduate students in two Taiwan universities answered the questionnaire. Most of the constructs in the model were measured using existing scales, while some measurement items were created specifically for this research. Structural equation modeling was employed to examine the fit of the data with the model by using the LISREL software. Findings – The results of the data analysis shows that the data fit the extended TAM model well. Consumers hold positive attitudes for M-learning, viewing M-learning as an efficient tool. Specifically, the results show that individual differences have a great impact on user acceptance and that the perceived enjoyment and perceived mobility can predict user intentions of using M-learning. Originality/value – There is scant research available in the literature on user acceptance of M-learning from a customer’s perspective. The present research shows that TAM can predict user acceptance of this new technology. Perceived enjoyment and perceived mobility value are antecedents of user acceptance. The model enhances our understanding of consumer motivation of using M-learning. This understanding can aid our efforts when promoting M-learning.",
"title": ""
},
{
"docid": "996eb4470d33f00ed9cb9bcc52eb5d82",
"text": "Andrew is a distributed computing environment that is a synthesis of the personal computing and timesharing paradigms. When mature, it is expected to encompass over 5,000 workstations spanning the Carnegie Mellon University campus. This paper examines the security issues that arise in such an environment and describes the mechanisms that have been developed to address them. These mechanisms include the logical and physical separation of servers and clients, support for secure communication at the remote procedure call level, a distributed authentication service, a file-protection scheme that combines access lists with UNIX mode bits, and the use of encryption as a basic building block. The paper also discusses the assumptions underlying security in Andrew and analyzes the vulnerability of the system. Usage experience reveals that resource control, particularly of workstation CPU cycles, is more important than originally anticipated and that the mechanisms available to address this issue are rudimentary.",
"title": ""
},
{
"docid": "ba57246214ea44910e94471375836d87",
"text": "Collaborative filtering is a technique for recommending documents to users based on how similar their tastes are to other users. If two users tend to agree on what they like, the system will recommend the same documents to them. The generalized vector space model of information retrieval represents a document by a vector of its similarities to all other documents. The process of collaborative filtering is nearly identical to the process of retrieval using GVSM in a matrix of user ratings. Using this observation, a model for filtering collaboratively using document content is possible.",
"title": ""
},
{
"docid": "774bf4b0a2c8fe48607e020da2737041",
"text": "A class of three-dimensional planar arrays in substrate integrated waveguide (SIW) technology is proposed, designed and demonstrated with 8 × 16 elements at 35 GHz for millimeter-wave imaging radar system applications. Endfire element is generally chosen to ensure initial high gain and broadband characteristics for the array. Fermi-TSA (tapered slot antenna) structure is used as element to reduce the beamwidth. Corrugation is introduced to reduce the resulting antenna physical width without degradation of performance. The achieved measured gain in our demonstration is about 18.4 dBi. A taper shaped air gap in the center is created to reduce the coupling between two adjacent elements. An SIW H-to-E-plane vertical interconnect is proposed in this three-dimensional architecture and optimized to connect eight 1 × 16 planar array sheets to the 1 × 8 final network. The overall architecture is exclusively fabricated by the conventional PCB process. Thus, the developed SIW feeder leads to a significant reduction in both weight and cost, compared to the metallic waveguide-based counterpart. A complete antenna structure is designed and fabricated. The planar array ensures a gain of 27 dBi with low SLL of 26 dB and beamwidth as narrow as 5.15 degrees in the E-plane and 6.20 degrees in the 45°-plane.",
"title": ""
},
{
"docid": "e99369633599d38d84ad1a5c74695475",
"text": "Sarcasm is a form of language in which individual convey their message in an implicit way i.e. the opposite of what is implied. Sarcasm detection is the task of predicting sarcasm in text. This is the crucial step in sentiment analysis due to inherently ambiguous nature of sarcasm. With this ambiguity, sarcasm detection has always been a difficult task, even for humans. Therefore sarcasm detection has gained importance in many Natural Language Processing applications. In this paper, we describe approaches, issues, challenges and future scopes in sarcasm detection.",
"title": ""
},
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "8f1bcaed29644b80a623be8d26b81c20",
"text": "The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN, but to use it with consideration of its contents.",
"title": ""
},
{
"docid": "07575ce75d921d6af72674e1fe563ff7",
"text": "With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.",
"title": ""
},
{
"docid": "0297af005c837e410272ab3152942f90",
"text": "Iris authentication is a popular method where persons are accurately authenticated. During authentication phase the features are extracted which are unique. Iris authentication uses IR images for authentication. This proposed work uses color iris images for authentication. Experiments are performed using ten different color models. This paper is focused on performance evaluation of color models used for color iris authentication. This proposed method is more reliable which cope up with different noises of color iris images. The experiments reveals the best selection of color model used for iris authentication. The proposed method is validated on UBIRIS noisy iris database. The results demonstrate that the accuracy is 92.1%, equal error rate of 0.072 and computational time is 0.039 seconds.",
"title": ""
},
{
"docid": "e1c927d7fbe826b741433c99fff868d0",
"text": "Multiclass maps are scatterplots, multidimensional projections, or thematic geographic maps where data points have a categorical attribute in addition to two quantitative attributes. This categorical attribute is often rendered using shape or color, which does not scale when overplotting occurs. When the number of data points increases, multiclass maps must resort to data aggregation to remain readable. We present multiclass density maps: multiple 2D histograms computed for each of the category values. Multiclass density maps are meant as a building block to improve the expressiveness and scalability of multiclass map visualization. In this article, we first present a short survey of aggregated multiclass maps, mainly from cartography. We then introduce a declarative model—a simple yet expressive JSON grammar associated with visual semantics—that specifies a wide design space of visualizations for multiclass density maps. Our declarative model is expressive and can be efficiently implemented in visualization front-ends such as modern web browsers. Furthermore, it can be reconfigured dynamically to support data exploration tasks without recomputing the raw data. Finally, we demonstrate how our model can be used to reproduce examples from the past and support exploring data at scale.",
"title": ""
},
{
"docid": "0c8b192807a6728be21e6a19902393c0",
"text": "The balance between facilitation and competition is likely to change with age due to the dynamic nature of nutrient, water and carbon cycles, and light availability during stand development. These processes have received attention in harsh, arid, semiarid and alpine ecosystems but are rarely examined in more productive communities, in mixed-species forest ecosystems or in long-term experiments spanning more than a decade. The aim of this study was to examine how inter- and intraspecific interactions between Eucalyptus globulus Labill. mixed with Acacia mearnsii de Wildeman trees changed with age and productivity in a field experiment in temperate south-eastern Australia. Spatially explicit neighbourhood indices were calculated to quantify tree interactions and used to develop growth models to examine how the tree interactions changed with time and stand productivity. Interspecific influences were usually less negative than intraspecific influences, and their difference increased with time for E. globulus and decreased with time for A. mearnsii. As a result, the growth advantages of being in a mixture increased with time for E. globulus and decreased with time for A. mearnsii. The growth advantage of being in a mixture also decreased for E. globulus with increasing stand productivity, showing that spatial as well as temporal dynamics in resource availability influenced the magnitude and direction of plant interactions.",
"title": ""
},
{
"docid": "41ac115647c421c44d7ef1600814dc3e",
"text": "PURPOSE\nThe bony skeleton serves as the scaffolding for the soft tissues of the face; however, age-related changes of bony morphology are not well defined. This study sought to compare the anatomic relationships of the facial skeleton and soft tissue structures between young and old men and women.\n\n\nMETHODS\nA retrospective review of CT scans of 100 consecutive patients imaged at Duke University Medical Center between 2004 and 2007 was performed using the Vitrea software package. The study population included 25 younger women (aged 18-30 years), 25 younger men, 25 older women (aged 55-65 years), and 25 older men. Using a standardized reference line, the distances from the anterior corneal plane to the superior orbital rim, lateral orbital rim, lower eyelid fat pad, inferior orbital rim, anterior cheek mass, and pyriform aperture were measured. Three-dimensional bony reconstructions were used to record the angular measurements of 4 bony regions: glabellar, orbital, maxillary, and pyriform aperture.\n\n\nRESULTS\nThe glabellar (p = 0.02), orbital (p = 0.0007), maxillary (p = 0.0001), and pyriform (p = 0.008) angles all decreased with age. The maxillary pyriform (p = 0.003) and infraorbital rim (p = 0.02) regressed with age. Anterior cheek mass became less prominent with age (p = 0.001), but the lower eyelid fat pad migrated anteriorly over time (p = 0.007).\n\n\nCONCLUSIONS\nThe facial skeleton appears to remodel throughout adulthood. Relative to the globe, the facial skeleton appears to rotate such that the frontal bone moves anteriorly and inferiorly while the maxilla moves posteriorly and superiorly. This rotation causes bony angles to become more acute and likely has an effect on the position of overlying soft tissues. These changes appear to be more dramatic in women.",
"title": ""
},
{
"docid": "041ca42d50e4cac92cf81c989a8527fb",
"text": "Helix antenna consists of a single conductor or multi-conductor open helix-shaped. Helix antenna has a three-dimensional shape. The shape of the helix antenna resembles a spring and the diameter and the distance between the windings of a certain size. This study aimed to design a signal amplifier wifi on 2.4 GHz. Materials used in the form of the pipe, copper wire, various connectors and wireless adapters and various other components. Mmmanagal describing simulation result on helix antenna. Further tested with wirelesmon software to test the wifi signal strength. The results are based Mmanagal, radiation patterns emitted achieve Ganin: 4.5 dBi horizontal polarization, F / B: −0,41dB; rear azimuth 1200 elevation 600, 2400 MHz, R27.9 and jX impedance −430.9, Elev: 64.40 real GND: 0.50 m height, and wifi signal strength increased from 47% to 55%.",
"title": ""
},
{
"docid": "857a2098e5eb48340699c6b7a29ec293",
"text": "Compressibiity of individuai sequences by the ciam of generaihd finite-atate information-losales encoders ia investigated These encodersrpnoperateinavariabie-ratemodeasweUasaflxedrateone,nnd they aiiow for any fhite-atate acheme of variabie-iength-to-variable-ien@ coding. For every individuai hfiite aeqence x a quantity p (x) ia defined, calledthecompressibilityofx,whirhisshowntobetheasymptotieatly attainable lower bound on the compression ratio tbat cao be achieved for x by any finite-state encoder. ‘flds is demonstrated by means of a amatructivecodtngtbeoremanditsconversethat,apartfnnntheirafymptotic significance, also provide useful performance criteria for finite and practicai data-compression taaka. The proposed concept of compressibility ia aiao shown to play a role analogous to that of entropy in ciaasicai informatfon theory where onedeaia with probabilistic ensembles of aequencea ratk Manuscript received June 10, 1977; revised February 20, 1978. J. Ziv is with Bell Laboratories, Murray Hill, NJ 07974, on leave from the Department of Electrical Engineering, Techmon-Israel Institute of Technology, Halfa, Israel. A. Lempel is with Sperry Research Center, Sudbury, MA 01776, on leave from the Department of Electrical Engineer@, Technion-Israel Institute of Technology, Haifa, Israel. tium with individuai sequences. Widie the delinition of p (x) aiiows a different machine for each different sequence to be compresse4 the constructive coding theorem ieada to a universal algorithm that is aaymik toticaiiy optfmai for au sequencea.",
"title": ""
}
] | scidocsrr |
5a7052cb7df7235f112f0d4f750339a0 | Exploring ROI size in deep learning based lipreading | [
{
"docid": "7fe3cf6b8110c324a98a90f31064dadb",
"text": "Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long-Short Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification and also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth and difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7% over the base line is reported on the OuluVS2 database, and 1.5% on the CUAVE database when compared with other methods which use a similar visual front-end.",
"title": ""
}
] | [
{
"docid": "335daed2a03f710d25e1e0a43c600453",
"text": "The Digital Bibliography and Library Project (DBLP) is a popular computer science bibliography website hosted at the University of Trier in Germany. It currently contains 2,722,212 computer science publications with additional information about the authors and conferences, journals, or books in which these are published. Although the database covers the majority of papers published in this field of research, it is still hard to browse the vast amount of textual data manually to find insights and correlations in it, in particular time-varying ones. This is also problematic if someone is merely interested in all papers of a specific topic and possible correlated scientific words which may hint at related papers. To close this gap, we propose an interactive tool which consists of two separate components, namely data analysis and data visualization. We show the benefits of our tool and explain how it might be used in a scenario where someone is confronted with the task of writing a state-of-the art report on a specific topic. We illustrate how data analysis, data visualization, and the human user supported by interaction features can work together to find insights which makes typical literature search tasks faster.",
"title": ""
},
{
"docid": "a601abae0a3d54d4aa3ecbb4bd09755a",
"text": "Article history: Received 27 March 2008 Received in revised form 2 September 2008 Accepted 20 October 2008",
"title": ""
},
{
"docid": "51fb43ac979ce0866eb541adc145ba70",
"text": "In many cooperatively breeding species, group members form a dominance hierarchy or queue to inherit the position of breeder. Models aimed at understanding individual variation in helping behavior, however, rarely take into account the effect of dominance rank on expected future reproductive success and thus the potential direct fitness costs of helping. Here we develop a kin-selection model of helping behavior in multimember groups in which only the highest ranking individual breeds. Each group member can invest in the dominant’s offspring at a cost to its own survivorship. The model predicts that lower ranked subordinates, who have a smaller probability of inheriting the group, should work harder than higher ranked subordinates. This prediction holds regardless of whether the intrinsic mortality rate of subordinates increases or decreases with rank. The prediction does not necessarily hold, however, where the costs of helping are higher for lower ranked individuals: a situation that may be common in vertebrates. The model makes two further testable predictions: that the helping effort of an individual of given rank should be lower in larger groups, and the reproductive success of dominants should be greater where group members are more closely related. Empirical evidence for these predictions is discussed. We argue that the effects of rank on stable helping effort may explain why attempts to correlate individual helping effort with relatedness in cooperatively breeding species have met with limited success.",
"title": ""
},
{
"docid": "e8b199733c0304731a60db7c42987cf6",
"text": "This ethnographic study of 22 diverse families in the San Francisco Bay Area provides a holistic account of parents' attitudes about their children's use of technology. We found that parents from different socioeconomic classes have different values and practices around technology use, and that those values and practices reflect structural differences in their everyday lives. Calling attention to class differences in technology use challenges the prevailing practice in human-computer interaction of designing for those similar to oneself, which often privileges middle-class values and practices. By discussing the differences between these two groups and the advantages of researching both, this research highlights the benefits of explicitly engaging with socioeconomic status as a category of analysis in design.",
"title": ""
},
{
"docid": "2ec9ac2c283fa0458eb97d1e359ec358",
"text": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.",
"title": ""
},
{
"docid": "6566ad2c654274105e94f99ac5e20401",
"text": "This paper presents a universal morphological feature schema that represents the finest distinctions in meaning that are expressed by overt, affixal inflectional morphology across languages. This schema is used to universalize data extracted from Wiktionary via a robust multidimensional table parsing algorithm and feature mapping algorithms, yielding 883,965 instantiated paradigms in 352 languages. These data are shown to be effective for training morphological analyzers, yielding significant accuracy gains when applied to Durrett and DeNero’s (2013) paradigm learning framework.",
"title": ""
},
{
"docid": "405bae0d413aa4b5fef0ac8b8c639235",
"text": "Leukocyte adhesion deficiency (LAD) type III is a rare syndrome characterized by severe recurrent infections, leukocytosis, and increased bleeding tendency. All integrins are normally expressed yet a defect in their activation leads to the observed clinical manifestations. Less than 20 patients have been reported world wide and the primary genetic defect was identified in some of them. Here we describe the clinical features of patients in whom a mutation in the calcium and diacylglycerol-regulated guanine nucleotide exchange factor 1 (CalDAG GEF1) was found and compare them to other cases of LAD III and to animal models harboring a mutation in the CalDAG GEF1 gene. The hallmarks of the syndrome are recurrent infections accompanied by severe bleeding episodes distinguished by osteopetrosis like bone abnormalities and neurodevelopmental defects.",
"title": ""
},
{
"docid": "4a761bed54487cb9c34fc0ff27883944",
"text": "We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule qualities. This implies that supervised capsules networks can’t be very deep. Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsules networks. We train a sparse, unsupervised capsules network of similar geometry to (Sabour et al., 2017) on MNIST (LeCun et al., 1998) and then test classification accuracy on affNIST1 using an SVM layer. Accuracy is improved from benchmark 79% to 90%.",
"title": ""
},
{
"docid": "c0762517ebbae00ab5ee1291460c164c",
"text": "This paper compares various topologies for 6.6kW on-board charger (OBC) to find out suitable topology. In general, OBC consists of 2-stage; power factor correction (PFC) stage and DC-DC converter stage. Conventional boost PFC, interleaved boost PFC, and semi bridgeless PFC are considered as PFC circuit, and full-bridge converter, phase shift full-bridge converter, and series resonant converter are taken into account for DC-DC converter circuit. The design process of each topology is presented. Then, loss analysis is implemented in order to calculate the efficiency of each topology for PFC circuit and DC-DC converter circuit. In addition, the volume of magnetic components and number of semi-conductor elements are considered. Based on these results, topology selection guideline according to the system specification of 6.6kW OBC is proposed.",
"title": ""
},
{
"docid": "12274a9b350f1d1f7a3eb0cd865f260c",
"text": "A large amount of multimedia data (e.g., image and video) is now available on the Web. A multimedia entity does not appear in isolation, but is accompanied by various forms of metadata, such as surrounding text, user tags, ratings, and comments etc. Mining these textual metadata has been found to be effective in facilitating multimedia information processing and management. A wealth of research efforts has been dedicated to text mining in multimedia. This chapter provides a comprehensive survey of recent research efforts. Specifically, the survey focuses on four aspects: (a) surrounding text mining; (b) tag mining; (c) joint text and visual content mining; and (d) cross text and visual content mining. Furthermore, open research issues are identified based on the current research efforts.",
"title": ""
},
{
"docid": "7f71e539817c80aaa0a4fe3b68d76948",
"text": "We propose to help weakly supervised object localization for classes where location annotations are not available, by transferring things and stuff knowledge from a source set with available annotations. The source and target classes might share similar appearance (e.g. bear fur is similar to cat fur) or appear against similar background (e.g. horse and sheep appear against grass). To exploit this, we acquire three types of knowledge from the source set: a segmentation model trained on both thing and stuff classes; similarity relations between target and source classes; and cooccurrence relations between thing and stuff classes in the source. The segmentation model is used to generate thing and stuff segmentation maps on a target image, while the class similarity and co-occurrence knowledge help refining them. We then incorporate these maps as new cues into a multiple instance learning framework (MIL), propagating the transferred knowledge from the pixel level to the object proposal level. In extensive experiments, we conduct our transfer from the PASCAL Context dataset (source) to the ILSVRC, COCO and PASCAL VOC 2007 datasets (targets). We evaluate our transfer across widely different thing classes, including some that are not similar in appearance, but appear against similar background. The results demonstrate significant improvement over standard MIL, and we outperform the state-of-the-art in the transfer setting.",
"title": ""
},
{
"docid": "a3585d424a54c31514aba579b80d8231",
"text": "The vast majority of today's critical infrastructure is supported by numerous feedback control loops and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A,C) of the system and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm inspired from techniques in compressed sensing to estimate the state of the plant despite attacks. We give a theoretical characterization of the performance of this algorithm and we show on numerical simulations that the method is promising and allows to reconstruct the state accurately despite attacks. Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output feedback controllers can be reduced to the design of resilient state estimators.",
"title": ""
},
{
"docid": "07941e1f7a8fd0bbc678b641b80dc037",
"text": "This contribution presents a very brief and critical discussion on automated machine learning (AutoML), which is categorized here into two classes, referred to as narrow AutoML and generalized AutoML, respectively. The conclusions yielded from this discussion can be summarized as follows: (1) most existent research on AutoML belongs to the class of narrow AutoML; (2) advances in narrow AutoML are mainly motivated by commercial needs, while any possible benefit obtained is definitely at a cost of increase in computing burdens; (3)the concept of generalized AutoML has a strong tie in spirit with artificial general intelligence (AGI), also called “strong AI”, for which obstacles abound for obtaining pivotal progresses.",
"title": ""
},
{
"docid": "ff20e5cd554cd628eba07776fa9a5853",
"text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many messages types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. We also discuss our experience in using our log parser to assist the log sanitization.",
"title": ""
},
{
"docid": "8fe6e954db9080e233bbc6dbf8117914",
"text": "This document defines a deterministic digital signature generation procedure. Such signatures are compatible with standard Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures and can be processed with unmodified verifiers, which need not be aware of the procedure described therein. Deterministic signatures retain the cryptographic security features associated with digital signatures but can be more easily implemented in various environments, since they do not need access to a source of high-quality randomness. Status of This Memo This document is not an Internet Standards Track specification; it is published for informational purposes. This is a contribution to the RFC Series, independently of any other RFC stream. The RFC Editor has chosen to publish this document at its discretion and makes no statement about its value for implementation or deployment. Documents approved for publication by the RFC Editor are not a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.",
"title": ""
},
{
"docid": "04f705462bdd34a8d82340fb59264a51",
"text": "This paper describes EmoTweet-28, a carefully curated corpus of 15,553 tweets annotated with 28 emotion categories for the purpose of training and evaluating machine learning models for emotion classification. EmoTweet-28 is, to date, the largest tweet corpus annotated with fine-grained emotion categories. The corpus contains annotations for four facets of emotion: valence, arousal, emotion category and emotion cues. We first used small-scale content analysis to inductively identify a set of emotion categories that characterize the emotions expressed in microblog text. We then expanded the size of the corpus using crowdsourcing. The corpus encompasses a variety of examples including explicit and implicit expressions of emotions as well as tweets containing multiple emotions. EmoTweet-28 represents an important resource to advance the development and evaluation of more emotion-sensitive systems.",
"title": ""
},
{
"docid": "0a3f5ff37c49840ec8e59cbc56d31be2",
"text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.",
"title": ""
},
{
"docid": "f733b53147ce1765709acfcba52c8bbf",
"text": "BACKGROUND\nIt is important to evaluate the impact of cannabis use on onset and course of psychotic illness, as the increasing number of novice cannabis users may translate into a greater public health burden. This study aims to examine the relationship between adolescent onset of regular marijuana use and age of onset of prodromal symptoms, or first episode psychosis, and the manifestation of psychotic symptoms in those adolescents who use cannabis regularly.\n\n\nMETHODS\nA review was conducted of the current literature for youth who initiated cannabis use prior to the age of 18 and experienced psychotic symptoms at, or prior to, the age of 25. Seventeen studies met eligibility criteria and were included in this review.\n\n\nRESULTS\nThe current weight of evidence supports the hypothesis that early initiation of cannabis use increases the risk of early onset psychotic disorder, especially for those with a preexisting vulnerability and who have greater severity of use. There is also a dose-response association between cannabis use and symptoms, such that those who use more tend to experience greater number and severity of prodromal and diagnostic psychotic symptoms. Those with early-onset psychotic disorder and comorbid cannabis use show a poorer course of illness in regards to psychotic symptoms, treatment, and functional outcomes. However, those with early initiation of cannabis use appear to show a higher level of social functioning than non-cannabis users.\n\n\nCONCLUSIONS\nAdolescent initiation of cannabis use is associated, in a dose-dependent fashion, with emergence and severity of psychotic symptoms and functional impairment such that those who initiate use earlier and use at higher frequencies demonstrate poorer illness and treatment outcomes. These associations appear more robust for adolescents at high risk for developing a psychotic disorder.",
"title": ""
},
{
"docid": "f59adaac85f7131bf14335dad2337568",
"text": "Product search is an important part of online shopping. In contrast to many search tasks, the objectives of product search are not confined to retrieving relevant products. Instead, it focuses on finding items that satisfy the needs of individuals and lead to a user purchase. The unique characteristics of product search make search personalization essential for both customers and e-shopping companies. Purchase behavior is highly personal in online shopping and users often provide rich feedback about their decisions (e.g. product reviews). However, the severe mismatch found in the language of queries, products and users make traditional retrieval models based on bag-of-words assumptions less suitable for personalization in product search. In this paper, we propose a hierarchical embedding model to learn semantic representations for entities (i.e. words, products, users and queries) from different levels with their associated language data. Our contributions are three-fold: (1) our work is one of the initial studies on personalized product search; (2) our hierarchical embedding model is the first latent space model that jointly learns distributed representations for queries, products and users with a deep neural network; (3) each component of our network is designed as a generative model so that the whole structure is explainable and extendable. Following the methodology of previous studies, we constructed personalized product search benchmarks with Amazon product data. Experiments show that our hierarchical embedding model significantly outperforms existing product search baselines on multiple benchmark datasets.",
"title": ""
}
] | scidocsrr |
edc8871e7c4dd6ee0caf1ee083242a3a | BotMosaic: Collaborative Network Watermark for Botnet Detection | [
{
"docid": "e77b339a245fc09111d7c9033db7a884",
"text": "Botnets are now recognized as one of the most serious security threats. In contrast to previous malware, botnets have the characteristic of a command and control (C&C) channel. Botnets also often use existing common protocols, e.g., IRC, HTTP, and in protocol-conforming manners. This makes the detection of botnet C&C a challenging problem. In this paper, we propose an approach that uses network-based anomaly detection to identify botnet C&C channels in a local area network without any prior knowledge of signatures or C&C server addresses. This detection approach can identify both the C&C servers and infected hosts in the network. Our approach is based on the observation that, because of the pre-programmed activities related to C&C, bots within the same botnet will likely demonstrate spatial-temporal correlation and similarity. For example, they engage in coordinated communication, propagation, and attack and fraudulent activities. Our prototype system, BotSniffer, can capture this spatial-temporal correlation in network traffic and utilize statistical algorithms to detect botnets with theoretical bounds on the false positive and false negative rates. We evaluated BotSniffer using many real-world network traces. The results show that BotSniffer can detect real-world botnets with high accuracy and has a very low false positive rate.",
"title": ""
}
] | [
{
"docid": "932934a4362bd671427954d0afb61459",
"text": "On the basis of the similarity between spinel and rocksalt structures, it is shown that some spinel oxides (e.g., MgCo2O4, etc) can be cathode materials for Mg rechargeable batteries around 150 °C. The Mg insertion into spinel lattices occurs via \"intercalation and push-out\" process to form a rocksalt phase in the spinel mother phase. For example, by utilizing the valence change from Co(III) to Co(II) in MgCo2O4, Mg insertion occurs at a considerably high potential of about 2.9 V vs. Mg2+/Mg, and similarly it occurs around 2.3 V vs. Mg2+/Mg with the valence change from Mn(III) to Mn(II) in MgMn2O4, being comparable to the ab initio calculation. The feasibility of Mg insertion would depend on the phase stability of the counterpart rocksalt XO of MgO in Mg2X2O4 or MgX3O4 (X = Co, Fe, Mn, and Cr). In addition, the normal spinel MgMn2O4 and MgCr2O4 can be demagnesiated to some extent owing to the robust host structure of Mg1-xX2O4, where the Mg extraction/insertion potentials for MgMn2O4 and MgCr2O4 are both about 3.4 V vs. Mg2+/Mg. Especially, the former \"intercalation and push-out\" process would provide a safe and stable design of cathode materials for polyvalent cations.",
"title": ""
},
{
"docid": "a24f958c480812feb338b651849037b2",
"text": "This paper investigates the detection and classification of fighting and pre and post fighting events when viewed from a video camera. Specifically we investigate normal, pre, post and actual fighting sequences and classify them. A hierarchical AdaBoost classifier is described and results using this approach are presented. We show it is possible to classify pre-fighting situations using such an approach and demonstrate how it can be used in the general case of continuous sequences.",
"title": ""
},
{
"docid": "00b85bd052a196b1f02d00f6ad532ed2",
"text": "The book Build Your Own Database Driven Website Using PHP & MySQL by Kevin Yank provides a hands-on look at what's involved in building a database-driven Web site. The author does a good job of patiently teaching the reader how to install and configure PHP 5 and MySQL to organize dynamic Web pages and put together a viable content management system. At just over 350 pages, the book is rather small compared to a lot of others on the topic, but it contains all the essentials. The author employs excellent teaching techniques to set up the foundation stone by stone and then grouts everything solidly together later in the book. This book aims at intermediate and advanced Web designers looking to make the leap to server-side programming. The author assumes his readers are comfortable with simple HTML. He provides an excellent introduction to PHP and MySQL (including installation) and explains how to make them work together. The amount of material he covers guarantees that almost any reader will benefit.",
"title": ""
},
{
"docid": "6ac996c20f036308f36c7b667babe876",
"text": "Patents are a very useful source of technical information. The public availability of patents over the Internet, with for some databases (eg. Espacenet) the assurance of a constant format, allows the development of high value added products using this information source and provides an easy way to analyze patent information. This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries providing a way to use patents as a ideas resource thus improving technological innovation.",
"title": ""
},
{
"docid": "1289f47ea43ddd72fc90977b0a538d1c",
"text": "This study identifies evaluative, attitudinal, and behavioral factors that enhance or reduce the likelihood of consumers aborting intended online transactions (transaction abort likelihood). Path analyses show that risk perceptions associated with eshopping have direct influence on the transaction abort likelihood, whereas benefit perceptions do not. In addition, consumers who have favorable attitudes toward e-shopping, purchasing experiences from the Internet, and high purchasing frequencies from catalogs are less likely to abort intended transactions. The results also show that attitude toward e-shopping mediate relationships between the transaction abort likelihood and other predictors (i.e., effort saving, product offering, control in the information search, and time spent on the Internet per visit). # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c052f693b65a0f3189fc1e9f4df11162",
"text": "In this paper we present ElastiFace, a simple and versatile method for establishing correspondence between textured face models, either for the construction of a blend-shape facial rig or for the exploration of new characters by morphing between a set of input models. While there exists a wide variety of approaches for inter-surface mapping and mesh morphing, most techniques are not suitable for our application: They either require the insertion of additional vertices, are limited to topological planes or spheres, are restricted to near-isometric input meshes, and/or are algorithmically and computationally involved. In contrast, our method extends linear non-rigid registration techniques to allow for strongly varying input geometries. It is geometrically intuitive, simple to implement, computationally efficient, and robustly handles highly non-isometric input models. In order to match the requirements of other applications, such as recent perception studies, we further extend our geometric matching to the matching of input textures and morphing of geometries and rendering styles.",
"title": ""
},
{
"docid": "071b46c04389b6fe3830989a31991d0d",
"text": "Direct slicing of CAD models to generate process planning instructions for solid freeform fabrication may overcome inherent disadvantages of using stereolithography format in terms of the process accuracy, ease of file management, and incorporation of multiple materials. This paper will present the results of our development of a direct slicing algorithm for layered freeform fabrication. The direct slicing algorithm was based on a neutral, international standard (ISO 10303) STEP-formatted non-uniform rational B-spline (NURBS) geometric representation and is intended to be independent of any commercial CAD software. The following aspects of the development effort will be presented: (1) determination of optimal build direction based upon STEP-based NURBS models; (2) adaptive subdivision of NURBS data for geometric refinement; and (3) ray-casting slice generation into sets of raster patterns. The development also provides for multi-material slicing and will provide an effective tool in heterogeneous slicing processes. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d38db185d37fa96795e640d918a8dfe8",
"text": "Learning behaviour of artificial agents is commonly studied in the framework of Reinforcement Learning. Reinforcement Learning gained increasing popularity in the past years. This is partially due to developments that enabled the possibility to employ complex function approximators, such as deep networks, in combination with the framework. Two of the core challenges in Reinforcement Learning are the correct assignment of credits over long periods of time and dealing with sparse rewards. In this thesis we propose a framework based on the notions of goals to tackle these problems. This work implements several components required to obtain a form of goal-directed behaviour, similar to how it is observed in human reasoning. This includes the representation of a goal space, learning how to set goals and finally how to reach them. The framework itself is build upon the options model, which is a common approach for representing temporally extended actions in Reinforcement Learning. All components of the proposed method can be implemented as deep networks and the complete system can be learned in an end-to-end fashion using standard optimization techniques. We evaluate the approach on a set of continuous control problems of increasing difficulty. We show, that we are able to solve a difficult gathering task, which poses a challenge to state-of-the-art Reinforcement Learning algorithms. The presented approach is furthermore able to scale to complex kinematic agents of the MuJoCo benchmark.",
"title": ""
},
{
"docid": "714c06da1a728663afd8dbb1cd2d472d",
"text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. Experimental results on CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.",
"title": ""
},
{
"docid": "2cf7921cce2b3077c59d9e4e2ab13afe",
"text": "Scientists and consumers preference focused on natural colorants due to the emergence of negative health effects of synthetic colorants which is used for many years in foods. Interest in natural colorants is increasing with each passing day as a consequence of their antimicrobial and antioxidant effects. The biggest obstacle in promotion of natural colorants as food pigment agents is that it requires high investment. For this reason, the R&D studies related issues are shifted to processes to reduce cost and it is directed to pigment production from microorganisms with fermentation. Nowadays, there is pigments obtained by commercially microorganisms or plants with fermantation. These pigments can be use for both food colorant and food supplement. In this review, besides colourant and antioxidant properties, antimicrobial properties of natural colorants are discussed.",
"title": ""
},
{
"docid": "83f067159913e65410a054681461ab4d",
"text": "Cloud computing has revolutionized the way computing and software services are delivered to the clients on demand. It offers users the ability to connect to computing resources and access IT managed services with a previously unknown level of ease. Due to this greater level of flexibility, the cloud has become the breeding ground of a new generation of products and services. However, the flexibility of cloud-based services comes with the risk of the security and privacy of users' data. Thus, security concerns among users of the cloud have become a major barrier to the widespread growth of cloud computing. One of the security concerns of cloud is data mining based privacy attacks that involve analyzing data over a long period to extract valuable information. In particular, in current cloud architecture a client entrusts a single cloud provider with his data. It gives the provider and outside attackers having unauthorized access to cloud, an opportunity of analyzing client data over a long period to extract sensitive information that causes privacy violation of clients. This is a big concern for many clients of cloud. In this paper, we first identify the data mining based privacy risks on cloud data and propose a distributed architecture to eliminate the risks.",
"title": ""
},
{
"docid": "59a1088003576f2e75cdbedc24ae8bdf",
"text": "Two literatures or sets of articles are complementary if, considered together, they can reveal useful information of scientik interest not apparent in either of the two sets alone. Of particular interest are complementary literatures that are also mutually isolated and noninteractive (they do not cite each other and are not co-cited). In that case, the intriguing possibility akrae that thm &tfnrmnt;nn n&wd hv mwnhXno them 4. nnvnl Lyww u-c “‘1 YLL”I&.L.sU”4L 6uy’“s. u, b..S..“Y.Ayj .a.-** Y ..u. -... During the past decade, we have identified seven examples of complementary noninteractive structures in the biomedical literature. Each structure led to a novel, plausible, and testable hypothesis that, in several cases, was subsequently corroborated by medical researchers through clinical or laboratory investigation. We have also developed, tested, and described a systematic, computer-sided approach to iinding and identifying complementary noninteractive literatures. Specialization, Fragmentation, and a Connection Explosion By some obscure spontaneous process scientists have responded to the growth of science by organizing their work into soecialties, thus permitting each individual to -r-~ focus on a small part of the total literature. Specialties that grow too large tend to divide into subspecialties that have their own literatures which, by a process of repeated splitting, maintain more or less fixed and manageable size. As the total literature grows, the number of specialties, but not in general the size of each, increases (Kochen, 1963; Swanson, 199Oc). But the unintended consequence of specialization is fragmentation. By dividing up the pie, the potential relationships among its pieces tend to be neglected. Although scientific literature cannot, in the long run, grow disproportionately to the growth of the communities and resources that produce it, combinations of implicitlyrelated segments of literature can grow much faster than the literature itself and can readily exceed the capacity of the community to identify and assimilate such relatedness (Swanson, 1993). The signilicance of the “information explosion” thus may lie not in an explosion of quantity per se, but in an incalculably greater combinatorial explosion of unnoticed and unintended logical connections. The Significance of Complementary Noninteractive Literatures If two literatures each of substantial size are linked by arguments that they respectively put forward -that is, are “logically” related, or complementary -one would expect to gain usefui information by combining them. For example, suppose that one (biomedical) literature establishes that some environmental factor A influences certain internal physiological conditions and a second literature establishes that these same physiological changes influence the course of disease C. Presumably, then, anyone who reads both literatures could conclude that factor A might influence disease C. Under such --->!L---f -----l-----ry-?r-. ----.---,a ?1-_----_I rl-conamons or comptementdnty one woum dtso expect me two literatures to refer to each other. If, however, the two literatures were developed independently of one another, the logical l inkage illustrated may be both unintended and unnoticed. To detect such mutual isolation, we examine the citation pattern. If two literatures are “noninteractive” that ir if thmv hnvm n~.rer fnr odAnm\\ kppn &ml = ulyc 1U) a. “W, na6L.V ..Y.“. ,“a vva&“..n] “W.. 
UluIu together, and if neither cites the other, then it is possible that scientists have not previously considered both iiteratures together, and so it is possible that no one is aware of the implicit A-C connection. The two conditions, complementarily and noninteraction, describe a model structure that shows how useful information can remain undiscovered even though its components consist of public knowledge (Swanson, 1987,199l). Public Knowledge / Private Knowledge There is, of course, no way to know in any particular case whether the possibility of an AC relationship in the above model has or has not occurred to someone, or whether or not anyone has actually considered the two literatures on A and C together, a private matter that necessarily remains conjectural. However, our argument is based only on determining whether there is any printed evidence to the contrary. We are concerned with public rather than Data Mining: Integration Q Application 295 From: KDD-96 Proceedings. Copyright © 1996, AAAI (www.aaai.org). All rights reserved. private knowledge -with the state of the record produced rather than the state of mind of the producers (Swanson, 1990d). The point of bringing together the AB and BC literatures, in any event, is not to \"prove\" an AC linkage, (by considering only transitive relationships) but rather call attention to an apparently unnoticed association that may be worth investigating. In principle any chain of scientific, including analogic, reasoning in which different links appear in noninteractive literatures may lead to the discovery of new interesting connections. \"What people know\" is a common u derstanding of what is meant by \"knowledge\". If taken in this subjective sense, the idea of \"knowledge discovery\" could mean merely that someone discovered something they hadn’t known before. Our focus in the present paper is on a second sense of the word \"knowledge\", a meaning associated with the products of human i tellectual activity, as encoded in the public record, rather than with the contents of the human mind. This abstract world of human-created \"objective\" knowledge is open to exploration and discovery, for it can contain territory that is subjectively unknown to anyone (Popper, 1972). Our work is directed toward the discovery of scientificallyuseful information implicit in the public record, but not previously made xplicit. The problem we address concerns structures within the scientific literature, not within the mind. The Process of Finding Complementary Noninteractive Literatures During the past ten years, we have pursued three goals: i) to show in principle how new knowledge might be gained by synthesizing logicallyrelated noninteractive literatures; ii) to demonstrate that such structures do exist, at least within the biomedical literature; and iii) to develop a systematic process for finding them. In pursuit of goal iii, we have created interactive software and database arch strategies that can facilitate the discovery of complementary st uctures in the published literature of science. The universe or searchspace under consideration is limited only by the coverage of the major scientific databases, though we have focused primarily on the biomedical field and the MEDLINE database (8 million records). In 1991, a systematic approach to finding complementary structures was outlined and became a point of departure for software development (Swanson, 1991). 
The system that has now taken shape is based on a 3-way interaction between computer software, bibliographic databases, and a human operator. Tae interaction generates information structtues that are used heuristically to guide the search for promising complementary literatures. The user of the system begins by choosing a question 296 Technology Spotlight or problem area of scientific interest that can be associated with a literature, C. Elsewhere we describe and evaluate experimental computer software, which we call ARROWSMITH (Swanson & Smalheiser, 1997), that performs two separate functions that can be used independently. The first function produces a list of candidates for a second literature, A, complementary o C, from which the user can select one candidate (at a time) an input, along with C, to the second function. This first function can be considered as a computer-assisted process of problem-discovery, an issue identified in the AI literature (Langley, et al., 1987; p304-307). Alternatively, the user may wish to identify a second literature, A, as a conjecture or hypothesis generated independently of the computer-produced list of candidates. Our approach has been based on the use of article titles as a guide to identifying complementary literatures. As indicated above, our point of departure for the second function is a tentative scientific hypothesis associated with two literalxtres, A and C. A title-word search of MEDLINE is used to create two local computer title-files associated with A and C, respectively. These files are used as input to the ARROWSMITH software, which then produces a list of all words common to the two sets of titles, except for words excluded by an extensive stoplist (presently about 5000 words). The resulting list of words provides the basis for identifying title-word pathways that might provide clues to the presence of complementary arguments within the literatures corresponding to A and C. The output of this procedure is a structured titledisplay (plus journal citation), that serves as a heuristic aid to identifying word-linked titles and serves also as an organized guide to the literature.",
"title": ""
},
{
"docid": "28f8be68a0fe4762af272a0e11d53f7d",
"text": "In this article, we address the cross-domain (i.e., street and shop) clothing retrieval problem and investigate its real-world applications for online clothing shopping. It is a challenging problem due to the large discrepancy between street and shop domain images. We focus on learning an effective feature-embedding model to generate robust and discriminative feature representation across domains. Existing triplet embedding models achieve promising results by finding an embedding metric in which the distance between negative pairs is larger than the distance between positive pairs plus a margin. However, existing methods do not address the challenges in the cross-domain clothing retrieval scenario sufficiently. First, the intradomain and cross-domain data relationships need to be considered simultaneously. Second, the number of matched and nonmatched cross-domain pairs are unbalanced. To address these challenges, we propose a deep cross-triplet embedding algorithm together with a cross-triplet sampling strategy. The extensive experimental evaluations demonstrate the effectiveness of the proposed algorithms well. Furthermore, we investigate two novel online shopping applications, clothing trying on and accessories recommendation, based on a unified cross-domain clothing retrieval framework.",
"title": ""
},
{
"docid": "aab83f305b6519c091f883d869a0b92c",
"text": "With the development of the web of data, recent statistical, data-to-text generation approaches have focused on mapping data (e.g., database records or knowledge-base (KB) triples) to natural language. In contrast to previous grammar-based approaches, this more recent work systematically eschews syntax and learns a direct mapping between meaning representations and natural language. By contrast, I argue that an explicit model of syntax can help support NLG in several ways. Based on case studies drawn from KB-to-text generation, I show that syntax can be used to support supervised training with little training data; to ensure domain portability; and to improve statistical hypertagging.",
"title": ""
},
{
"docid": "ca4aa2c6f4096bbffaa2e3e1dd06fbe8",
"text": "Hybrid unmanned aircraft, that combine hover capability with a wing for fast and efficient forward flight, have attracted a lot of attention in recent years. Many different designs are proposed, but one of the most promising is the tailsitter concept. However, tailsitters are difficult to control across the entire flight envelope, which often includes stalled flight. Additionally, their wing surface makes them susceptible to wind gusts. In this paper, we propose incremental nonlinear dynamic inversion control for the attitude and position control. The result is a single, continuous controller, that is able to track the acceleration of the vehicle across the flight envelope. The proposed controller is implemented on the Cyclone hybrid UAV. Multiple outdoor experiments are performed, showing that unmodeled forces and moments are effectively compensated by the incremental control structure, and that accelerations can be tracked across the flight envelope. Finally, we provide a comprehensive procedure for the implementation of the controller on other types of hybrid UAVs.",
"title": ""
},
{
"docid": "9058505c04c1dc7c33603fd8347312a0",
"text": "Fear appeals are a polarizing issue, with proponents confident in their efficacy and opponents confident that they backfire. We present the results of a comprehensive meta-analysis investigating fear appeals' effectiveness for influencing attitudes, intentions, and behaviors. We tested predictions from a large number of theories, the majority of which have never been tested meta-analytically until now. Studies were included if they contained a treatment group exposed to a fear appeal, a valid comparison group, a manipulation of depicted fear, a measure of attitudes, intentions, or behaviors concerning the targeted risk or recommended solution, and adequate statistics to calculate effect sizes. The meta-analysis included 127 articles (9% unpublished) yielding 248 independent samples (NTotal = 27,372) collected from diverse populations. Results showed a positive effect of fear appeals on attitudes, intentions, and behaviors, with the average effect on a composite index being random-effects d = 0.29. Moderation analyses based on prominent fear appeal theories showed that the effectiveness of fear appeals increased when the message included efficacy statements, depicted high susceptibility and severity, recommended one-time only (vs. repeated) behaviors, and targeted audiences that included a larger percentage of female message recipients. Overall, we conclude that (a) fear appeals are effective at positively influencing attitude, intentions, and behaviors; (b) there are very few circumstances under which they are not effective; and (c) there are no identified circumstances under which they backfire and lead to undesirable outcomes.",
"title": ""
},
{
"docid": "4a2de9235a698a3b5e517446088d2ac6",
"text": "In recent years, there has been a growing interest in designing multi-robot systems (hereafter MRSs) to provide cost effective, fault-tolerant and reliable solutions to a variety of automated applications. Here, we review recent advancements in MRSs specifically designed for cooperative object transport, which requires the members of MRSs to coordinate their actions to transport objects from a starting position to a final destination. To achieve cooperative object transport, a wide range of transport, coordination and control strategies have been proposed. Our goal is to provide a comprehensive summary for this relatively heterogeneous and fast-growing body of scientific literature. While distilling the information, we purposefully avoid using hierarchical dichotomies, which have been traditionally used in the field of MRSs. Instead, we employ a coarse-grain approach by classifying each study based on the transport strategy used; pushing-only, grasping and caging. We identify key design constraints that may be shared among these studies despite considerable differences in their design methods. In the end, we discuss several open challenges and possible directions for future work to improve the performance of the current MRSs. Overall, we hope to increasethe visibility and accessibility of the excellent studies in the field and provide a framework that helps the reader to navigate through them more effectively.",
"title": ""
},
{
"docid": "4fcea2e99877dedc419893313c1baea4",
"text": "A cardiac circumstance affected through irregular electrical action of the heart is called an arrhythmia. A noninvasive method called Electrocardiogram (ECG) is used to diagnosis arrhythmias or irregularities of the heart. The difficulty encountered by doctors in the analysis of heartbeat irregularities id due to the non-stationary of ECG signal, the existence of noise and the abnormality of the heartbeat. The computer-assisted study of ECG signal supports doctors to diagnoses diseases of cardiovascular. The major limitations of all the ECG signal analysis of arrhythmia detection are because to the non-stationary behavior of the ECG signals and unobserved information existent in the ECG signals. In addition, detection based on Extreme learning machine (ELM) has become a common technique in machine learning. However, it easily suffers from overfitting. This paper proposes a hybrid classification technique using Bayesian and Extreme Learning Machine (B-ELM) technique for heartbeat recognition of arrhythmia detection AD. The proposed technique is capable of detecting arrhythmia classes with a maximum accuracy of (98.09%) and less computational time about 2.5s.",
"title": ""
},
{
"docid": "9e8d4b422a7ed05ee338fcd426dab723",
"text": "Entity typing is an essential task for constructing a knowledge base. However, many non-English knowledge bases fail to type their entities due to the absence of a reasonable local hierarchical taxonomy. Since constructing a widely accepted taxonomy is a hard problem, we propose to type these non-English entities with some widely accepted taxonomies in English, such as DBpedia, Yago and Freebase. We define this problem as cross-lingual type inference. In this paper, we present CUTE to type Chinese entities with DBpedia types. First we exploit the cross-lingual entity linking between Chinese and English entities to construct the training data. Then we propose a multi-label hierarchical classification algorithm to type these Chinese entities. Experimental results show the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "68288cbb20c43b2f1911d6264cc81a6c",
"text": "Folliculitis decalvans is an inflammatory presentation of cicatrizing alopecia characterized by inflammatory perifollicular papules and pustules. It generally occurs in adult males, predominantly involving the vertex and occipital areas of the scalp. The use of dermatoscopy in hair and scalp diseases improves diagnostic accuracy. Some trichoscopic findings, such as follicular tufts, perifollicular erythema, crusts and pustules, can be observed in folliculitis decalvans. More research on the pathogenesis and treatment options of this disfiguring disease is required for improving patient management.",
"title": ""
}
] | scidocsrr |
366061cc202731f6c17afeb18d38db19 | The DSM diagnostic criteria for gender identity disorder in adolescents and adults. | [
{
"docid": "e61d7b44a39c5cc3a77b674b2934ba40",
"text": "The sexual behaviors and attitudes of male-to-female (MtF) transsexuals have not been investigated systematically. This study presents information about sexuality before and after sex reassignment surgery (SRS), as reported by 232 MtF patients of one surgeon. Data were collected using self-administered questionnaires. The mean age of participants at time of SRS was 44 years (range, 18-70 years). Before SRS, 54% of participants had been predominantly attracted to women and 9% had been predominantly attracted to men. After SRS, these figures were 25% and 34%, respectively.Participants' median numbers of sexual partners before SRS and in the last 12 months after SRS were 6 and 1, respectively. Participants' reported number of sexual partners before SRS was similar to the number of partners reported by male participants in the National Health and Social Life Survey (NHSLS). After SRS, 32% of participants reported no sexual partners in the last 12 months, higher than reported by male or female participants in the NHSLS. Bisexual participants reported more partners before and after SRS than did other participants. 49% of participants reported hundreds of episodes or more of sexual arousal to cross-dressing or cross-gender fantasy (autogynephilia) before SRS; after SRS, only 3% so reported. More frequent autogynephilic arousal after SRS was correlated with more frequent masturbation, a larger number of sexual partners, and more frequent partnered sexual activity. 85% of participants experienced orgasm at least occasionally after SRS and 55% ejaculated with orgasm.",
"title": ""
},
{
"docid": "a4a15096e116a6afc2730d1693b1c34f",
"text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.",
"title": ""
}
] | [
{
"docid": "1f629796e9180c14668e28b83dc30675",
"text": "In this article we tackle the issue of searchable encryption with a generalized query model. Departing from many previous works that focused on queries consisting of a single keyword, we consider the the case of queries consisting of arbitrary boolean expressions on keywords, that is to say conjunctions and disjunctions of keywords and their complement. Our construction of boolean symmetric searchable encryption BSSE is mainly based on the orthogonalization of the keyword field according to the Gram-Schmidt process. Each document stored in an outsourced server is associated with a label which contains all the keywords corresponding to the document, and searches are performed by way of a simple inner product. Furthermore, the queries in the BSSE scheme are randomized. This randomization hides the search pattern of the user since the search results cannot be associated deterministically to queries. We formally define an adaptive security model for the BSSE scheme. In addition, the search complexity is in $O(n)$ where $n$ is the number of documents stored in the outsourced server.",
"title": ""
},
{
"docid": "98aec0805e83e344a6b9898fb65e1a11",
"text": "Technology offers the potential to objectively monitor people's eating and activity behaviors and encourage healthier lifestyles. BALANCE is a mobile phone-based system for long term wellness management. The BALANCE system automatically detects the user's caloric expenditure via sensor data from a Mobile Sensing Platform unit worn on the hip. Users manually enter information on foods eaten via an interface on an N95 mobile phone. Initial validation experiments measuring oxygen consumption during treadmill walking and jogging show that the system's estimate of caloric output is within 87% of the actual value. Future work will refine and continue to evaluate the system's efficacy and develop more robust data input and activity inference methods.",
"title": ""
},
{
"docid": "670b58d379b7df273309e55cf8e25db4",
"text": "In this paper, we introduce a new large-scale dataset of ships, called SeaShips, which is designed for training and evaluating ship object detection algorithms. The dataset currently consists of 31 455 images and covers six common ship types (ore carrier, bulk cargo carrier, general cargo ship, container ship, fishing boat, and passenger ship). All of the images are from about 10 080 real-world video segments, which are acquired by the monitoring cameras in a deployed coastline video surveillance system. They are carefully selected to mostly cover all possible imaging variations, for example, different scales, hull parts, illumination, viewpoints, backgrounds, and occlusions. All images are annotated with ship-type labels and high-precision bounding boxes. Based on the SeaShips dataset, we present the performance of three detectors as a baseline to do the following: 1) elementarily summarize the difficulties of the dataset for ship detection; 2) show detection results for researchers using the dataset; and 3) make a comparison to identify the strengths and weaknesses of the baseline algorithms. In practice, the SeaShips dataset would hopefully advance research and applications on ship detection.",
"title": ""
},
{
"docid": "a76ba02ef0f87a41cdff1a4046d4bba1",
"text": "This paper proposes two RF self-interference cancellation techniques. Their small form-factor enables full-duplex communication links for small-to-medium size portable devices and hence promotes the adoption of full-duplex in mass-market applications and next-generation standards, e.g. IEEE802.11 and 5G. Measured prototype implementations of an electrical balance duplexer and a dual-polarized antenna both achieve >50 dB self-interference suppression at RF, operating in the ISM band at 2.45GHz.",
"title": ""
},
{
"docid": "0be3de2b6f0dd5d3158cc7a98286d571",
"text": "The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attaché case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.",
"title": ""
},
{
"docid": "b0cba371bb9628ac96a9ae2bb228f5a9",
"text": "Graph-based recommendation approaches can model associations between users and items alongside additional contextual information. Recent studies demonstrated that representing features extracted from social media (SM) auxiliary data, like friendships, jointly with traditional users/items ratings in the graph, contribute to recommendation accuracy. In this work, we take a step further and propose an extended graph representation that includes socio-demographic and personal traits extracted from the content posted by the user on SM. Empirical results demonstrate that processing unstructured textual information collected from Twitter and representing it in structured form in the graph improves recommendation performance, especially in cold start conditions.",
"title": ""
},
{
"docid": "f5703292e4c722332dcd85b172a3d69e",
"text": "Since an ever-increasing part of the population makes use of social media in their day-to-day lives, social media data is being analysed in many different disciplines. The social media analytics process involves four distinct steps, data discovery, collection, preparation, and analysis. While there is a great deal of literature on the challenges and difficulties involving specific data analysis methods, there hardly exists research on the stages of data discovery, collection, and preparation. To address this gap, we conducted an extended and structured literature analysis through which we identified challenges addressed and solutions proposed. The literature search revealed that the volume of data was most often cited as a challenge by researchers. In contrast, other categories have received less attention. Based on the results of the literature search, we discuss the most important challenges for researchers and present potential solutions. The findings are used to extend an existing framework on social media analytics. The article provides benefits for researchers and practitioners who wish to collect and analyse social media data.",
"title": ""
},
{
"docid": "4ae0bb75493e5d430037ba03fcff4054",
"text": "David Moher is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, and the Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Alessandro Liberati is at the Università di Modena e Reggio Emilia, Modena, and the Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy. Jennifer Tetzlaff is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario. Douglas G Altman is at the Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom. Membership of the PRISMA Group is provided in the Acknowledgements.",
"title": ""
},
{
"docid": "9a5ef746c96a82311e3ebe8a3476a5f4",
"text": "A magnetic-tip steerable needle is presented with application to aiding deep brain stimulation electrode placement. The magnetic needle is 1.3mm in diameter at the tip with a 0.7mm diameter shaft, which is selected to match the size of a deep brain stimulation electrode. The tip orientation is controlled by applying torques to the embedded neodymium-iron-boron permanent magnets with a clinically-sized magnetic-manipulation system. The prototype design is capable of following trajectories under human-in-the-loop control with minimum bend radii of 100mm without inducing tissue damage and down to 30mm if some tissue damage is tolerable. The device can be retracted and redirected to reach multiple targets with a single insertion point.",
"title": ""
},
{
"docid": "3d8cd89ae0b69ff4820f253aec3dbbeb",
"text": "The importance of information as a resource for economic growth and education is steadily increasing. Due to technological advances in computer industry and the explosive growth of the Internet much valuable information will be available in digital libraries. This paper introduces a system that aims to support a user's browsing activities in document sets retrieved from a digital library. Latent Semantic Analysis is applied to extract salient semantic structures and citation patterns of documents stored in a digital library in a computationally expensive batch job. At retrieval time, cluster techniques are used to organize retrieved documents into clusters according to the previously extracted semantic similarities. A modified Boltzman algorithm [1] is employed to spatially organize the resulting clusters and their documents in the form of a three-dimensional information landscape or \"i-scape\". The i-scape is then displayed for interactive exploration via a multi-modal, virtual reality CAVE interface [8]. Users' browsing activities are recorded and user models are extracted to give newcomers online help based on previous navigation activity as well as to enable experienced users to recognize and exploit past user traces. In this way, the system provides interactive services to assist users in the spatial navigation, interpretation, and detailed exploration of potentially large document sets matching a query.",
"title": ""
},
{
"docid": "2cd5075ed124f933fe56fe1dd566df22",
"text": "We introduce MIDI-VAE, a neural network model based on Variational Autoencoders that is capable of handling polyphonic music with multiple instrument tracks, as well as modeling the dynamics of music by incorporating note durations and velocities. We show that MIDI-VAE can perform style transfer on symbolic music by automatically changing pitches, dynamics and instruments of a music piece from, e.g., a Classical to a Jazz style. We evaluate the efficacy of the style transfer by training separate style validation classifiers. Our model can also interpolate between short pieces of music, produce medleys and create mixtures of entire songs. The interpolations smoothly change pitches, dynamics and instrumentation to create a harmonic bridge between two music pieces. To the best of our knowledge, this work represents the first successful attempt at applying neural style transfer to complete musical compositions.",
"title": ""
},
{
"docid": "c536e79078d7d5778895e5ac7f02c95e",
"text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.",
"title": ""
},
{
"docid": "b5d22d191745e4b94c6b7784b52c8ed8",
"text": "One of the biggest problems of SMEs is their tendencies to financial distress because of insufficient finance background. In this study, an early warning system (EWS) model based on data mining for financial risk detection is presented. CHAID algorithm has been used for development of the EWS. Developed EWS can be served like a tailor made financial advisor in decision making process of the firms with its automated nature to the ones who have inadequate financial background. Besides, an application of the model implemented which covered 7853 SMEs based on Turkish Central Bank (TCB) 2007 data. By using EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals, and 4 financial road maps has been determined for financial risk mitigation. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4ad261905326b55a40569ebbc549a67c",
"text": "OBJECTIVES\nTo analyze the Spanish experience in an international study which evaluated tocilizumab in patients with rheumatoid arthritis (RA) and an inadequate response to conventional disease-modifying antirheumatic drugs (DMARDs) or tumor necrosis factor inhibitors (TNFis) in a clinical practice setting.\n\n\nMATERIAL AND METHODS\nSubanalysis of 170 patients with RA from Spain who participated in a phase IIIb, open-label, international clinical trial. Patients presented inadequate response to DMARDs or TNFis. They received 8mg/kg of tocilizumab every 4 weeks in combination with a DMARD or as monotherapy during 20 weeks. Safety and efficacy of tocilizumab were analyzed. Special emphasis was placed on differences between failure to a DMARD or to a TNFi and the need to switch to tocilizumab with or without a washout period in patients who had previously received TNFi.\n\n\nRESULTS\nThe most common adverse events were infections (25%), increased total cholesterol (38%) and transaminases (15%). Five patients discontinued the study due to an adverse event. After six months of tocilizumab treatment, 71/50/30% of patients had ACR 20/50/70 responses, respectively. A higher proportion of TNFi-naive patients presented an ACR20 response: 76% compared to 64% in the TNFi group with previous washout and 66% in the TNFi group without previous washout.\n\n\nCONCLUSIONS\nSafety results were consistent with previous results in patients with RA and an inadequate response to DMARDs or TNFis. Tocilizumab is more effective in patients who did not respond to conventional DMARDs than in patients who did not respond to TNFis.",
"title": ""
},
{
"docid": "a87c60deb820064abaa9093398937ff3",
"text": "Cardiac arrhythmia is one of the most important indicators of heart disease. Premature ventricular contractions (PVCs) are a common form of cardiac arrhythmia caused by ectopic heartbeats. The detection of PVCs by means of ECG (electrocardiogram) signals is important for the prediction of possible heart failure. This study focuses on the classification of PVC heartbeats from ECG signals and, in particular, on the performance evaluation of selected features using genetic algorithms (GA) to the classification of PVC arrhythmia. The objective of this study is to apply GA as a feature selection method to select the best feature subset from 200 time series features and to integrate these best features to recognize PVC forms. Neural networks, support vector machines and k-nearest neighbour classification algorithms were used. Findings were expressed in terms of accuracy, sensitivity, and specificity for the MIT-BIH Arrhythmia Database. The results showed that the proposed model achieved higher accuracy rates than those of other works on this topic.",
"title": ""
},
{
"docid": "5ea912d602b0107ae9833292da22b800",
"text": "We propose a novel and flexible anchor mechanism named MetaAnchor for object detection frameworks. Unlike many previous detectors model anchors via a predefined manner, in MetaAnchor anchor functions could be dynamically generated from the arbitrary customized prior boxes. Taking advantage of weight prediction, MetaAnchor is able to work with most of the anchor-based object detection systems such as RetinaNet. Compared with the predefined anchor scheme, we empirically find that MetaAnchor is more robust to anchor settings and bounding box distributions; in addition, it also shows the potential on transfer tasks. Our experiment on COCO detection task shows that MetaAnchor consistently outperforms the counterparts in various scenarios.",
"title": ""
},
{
"docid": "866b95a50dede975eeff9aeec91a610b",
"text": "In this paper, we focus on differential privacy preserving spectral graph analysis. Spectral graph analysis deals with the analysis of the spectra (eigenvalues and eigenvector components) of the graph’s adjacency matrix or its variants. We develop two approaches to computing the ε-differential eigen decomposition of the graph’s adjacency matrix. The first approach, denoted as LNPP, is based on the Laplace Mechanism that calibrates Laplace noise on the eigenvalues and every entry of the eigenvectors based on their sensitivities. We derive the global sensitivities of both eigenvalues and eigenvectors based on the matrix perturbation theory. Because the output eigenvectors after perturbation are no longer orthogonormal, we postprocess the output eigenvectors by using the state-of-the-art vector orthogonalization technique. The second approach, denoted as SBMF, is based on the exponential mechanism and the properties of the matrix Bingham-von Mises-Fisher density for network data spectral analysis. We prove that the sampling procedure achieves differential privacy. We conduct empirical evaluation on a real social network data and compare the two approaches in terms of utility preservation (the accuracy of spectra and the accuracy of low rank approximation) under the same differential privacy threshold. Our empirical evaluation results show that LNPP generally incurs smaller utility loss.",
"title": ""
},
{
"docid": "a7317f3f1b4767f20c38394e519fa0d8",
"text": "The development of the concept of burden for use in research lacks consistent conceptualization and operational definitions. The purpose of this article is to analyze the concept of burden in an effort to promote conceptual clarity. The technique advocated by Walker and Avant is used to analyze this concept. Critical attributes of burden include subjective perception, multidimensional phenomena, dynamic change, and overload. Predisposing factors are caregiver's characteristics, the demands of caregivers, and the involvement in caregiving. The consequences of burden generate problems in care-receiver, caregiver, family, and health care system. Overall, this article enables us to advance this concept, identify the different sources of burden, and provide directions for nursing intervention.",
"title": ""
}
] | scidocsrr |
7132da6719f38ed0f24d9f77e3be4f0c | Wide-area scene mapping for mobile visual tracking | [
{
"docid": "3982c66e695fdefe36d8d143247add88",
"text": "A recognition scheme that scales efficiently to a large number of objects is presented. The efficiency and quality is exhibited in a live demonstration that recognizes CD-covers from a database of 40000 images of popular music CD’s. The scheme builds upon popular techniques of indexing descriptors extracted from local regions, and is robust to background clutter and occlusion. The local region descriptors are hierarchically quantized in a vocabulary tree. The vocabulary tree allows a larger and more discriminatory vocabulary to be used efficiently, which we show experimentally leads to a dramatic improvement in retrieval quality. The most significant property of the scheme is that the tree directly defines the quantization. The quantization and the indexing are therefore fully integrated, essentially being one and the same. The recognition quality is evaluated through retrieval on a database with ground truth, showing the power of the vocabulary tree approach, going as high as 1 million images.",
"title": ""
}
] | [
{
"docid": "5ab1d4704e0f6c03fa96b6d530fcc6f8",
"text": "The deep convolutional neural networks have achieved significant improvements in accuracy and speed for single image super-resolution. However, as the depth of network grows, the information flow is weakened and the training becomes harder and harder. On the other hand, most of the models adopt a single-stream structure with which integrating complementary contextual information under different receptive fields is difficult. To improve information flow and to capture sufficient knowledge for reconstructing the high-frequency details, we propose a cascaded multi-scale cross network (CMSC) in which a sequence of subnetworks is cascaded to infer high resolution features in a coarse-to-fine manner. In each cascaded subnetwork, we stack multiple multi-scale cross (MSC) modules to fuse complementary multi-scale information in an efficient way as well as to improve information flow across the layers. Meanwhile, by introducing residual-features learning in each stage, the relative information between high-resolution and low-resolution features is fully utilized to further boost reconstruction performance. We train the proposed network with cascaded-supervision and then assemble the intermediate predictions of the cascade to achieve high quality image reconstruction. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over state-of-the-art superresolution methods.",
"title": ""
},
{
"docid": "9a6fbf264e212e6ee6b2d663042542f0",
"text": "Detailed 3D visual models of indoor spaces, from walls and floors to objects and their configurations, can provide extensive knowledge about the environments as well as rich contextual information of people living therein. Vision-based 3D modeling has only seen limited success in applications, as it faces many technical challenges that only a few experts understand, let alone solve. In this work we utilize (Kinect style) consumer depth cameras to enable non-expert users to scan their personal spaces into 3D models. We build a prototype mobile system for 3D modeling that runs in real-time on a laptop, assisting and interacting with the user on-the-fly. Color and depth are jointly used to achieve robust 3D registration. The system offers online feedback and hints, tolerates human errors and alignment failures, and helps to obtain complete scene coverage. We show that our prototype system can both scan large environments (50 meters across) and at the same time preserve fine details (centimeter accuracy). The capability of detailed 3D modeling leads to many promising applications such as accurate 3D localization, measuring dimensions, and interactive visualization.",
"title": ""
},
{
"docid": "c992c686e7e1b49127f6444a6adfa11e",
"text": "Published version ATTWOOD, F. (2005). What do people do with porn? qualitative research into the consumption, use and experience of pornography and other sexually explicit media. Sexuality and culture, 9 (2), 65-86. one copy of any article(s) in SHURA to facilitate their private study or for non-commercial research. You may not engage in further distribution of the material or use it for any profit-making activities or any commercial gain.",
"title": ""
},
{
"docid": "8333e369a5146156355ce83aaa965e71",
"text": "Software simulation tools supporting a teaching process are highly accepted by both teachers and students. We discuss the possibility of using automata simulators in theoretical computer science courses. The main purpose of this article is to propose key features and requirements of well designed automata simulator and to present our tool SimStudio -- integrated simulator of finite automaton, pushdown automaton, Turing machine, RAM with extension and abacus machine. The aim of this paper is to report our experiences with using of our automata simulators in teaching of the course \"Fundamentals of Theoretical Computer Science\" in bachelor educational program in software engineering held at Faculty of Informatics and Information Technologies, Slovak University of Technology in Bratislava.",
"title": ""
},
{
"docid": "1eba4ab4cb228a476987a5d1b32dda6c",
"text": "Optimistic estimates suggest that only 30-70% of waste generated in cities of developing countries is collected for disposal. As a result, uncollected waste is often disposed of into open dumps, along the streets or into water bodies. Quite often, this practice induces environmental degradation and public health risks. Notwithstanding, such practices also make waste materials readily available for itinerant waste pickers. These 'scavengers' as they are called, therefore perceive waste as a resource, for income generation. Literature suggests that Informal Sector Recycling (ISR) activity can bring other benefits such as, economic growth, litter control and resources conservation. This paper critically reviews trends in ISR activities in selected developing and transition countries. ISR often survives in very hostile social and physical environments largely because of negative Government and public attitude. Rather than being stigmatised, the sector should be recognised as an important element for achievement of sustainable waste management in developing countries. One solution to this problem could be the integration of ISR into the formal waste management system. To achieve ISR integration, this paper highlights six crucial aspects from literature: social acceptance, political will, mobilisation of cooperatives, partnerships with private enterprises, management and technical skills, as well as legal protection measures. It is important to note that not every country will have the wherewithal to achieve social inclusion and so the level of integration must be 'flexible'. In addition, the structure of the ISR should not be based on a 'universal' model but should instead take into account local contexts and conditions.",
"title": ""
},
{
"docid": "b93455e6b023910bf7711d56d16f62a2",
"text": "Learning low-dimensional embeddings of knowledge graphs is a powerful approach used to predict unobserved or missing edges between entities. However, an open challenge in this area is developing techniques that can go beyond simple edge prediction and handle more complex logical queries, which might involve multiple unobserved edges, entities, and variables. For instance, given an incomplete biological knowledge graph, we might want to predict what drugs are likely to target proteins involved with both diseases X and Y?—a query that requires reasoning about all possible proteins that might interact with diseases X and Y. Here we introduce a framework to efficiently make predictions about conjunctive logical queries—a flexible but tractable subset of first-order logic—on incomplete knowledge graphs. In our approach, we embed graph nodes in a low-dimensional space and represent logical operators as learned geometric operations (e.g., translation, rotation) in this embedding space. By performing logical operations within a low-dimensional embedding space, our approach achieves a time complexity that is linear in the number of query variables, compared to the exponential complexity required by a naive enumeration-based approach. We demonstrate the utility of this framework in two application studies on real-world datasets with millions of relations: predicting logical relationships in a network of drug-gene-disease interactions and in a graph-based representation of social interactions derived from a popular web forum.",
"title": ""
},
{
"docid": "1f73e9b3a6f669fcd7f9610aae5b0ee9",
"text": "The P value is a measure of statistical evidence that appears in virtually all medical research papers. Its interpretation is made extraordinarily difficult because it is not part of any formal system of statistical inference. As a result, the P value's inferential meaning is widely and often wildly misconstrued, a fact that has been pointed out in innumerable papers and books appearing since at least the 1940s. This commentary reviews a dozen of these common misinterpretations and explains why each is wrong. It also reviews the possible consequences of these improper understandings or representations of its meaning. Finally, it contrasts the P value with its Bayesian counterpart, the Bayes' factor, which has virtually all of the desirable properties of an evidential measure that the P value lacks, most notably interpretability. The most serious consequence of this array of P-value misconceptions is the false belief that the probability of a conclusion being in error can be calculated from the data in a single experiment without reference to external evidence or the plausibility of the underlying mechanism.",
"title": ""
},
{
"docid": "472fa4ac09577955b2bc7f0674c37dfe",
"text": "BACKGROUND\n47 XXY/46 XX mosaicism with characteristics suggesting Klinefelter syndrome is very rare and at present, only seven cases have been reported in the literature.\n\n\nCASE PRESENTATION\nWe report an Indian boy diagnosed as variant of Klinefelter syndrome with 47 XXY/46 XX mosaicism at age 12 years. He was noted to have right cryptorchidism and chordae at birth, but did not have surgery for these until age 3 years. During surgery, the right gonad was atrophic and removed. Histology revealed atrophic ovarian tissue. Pelvic ultrasound showed no Mullerian structures. There was however no clinical follow up and he was raised as a boy. At 12 years old he was re-evaluated because of parental concern about his 'female' body habitus. He was slightly overweight, had eunuchoid body habitus with mild gynaecomastia. The right scrotal sac was empty and a 2mls testis was present in the left scrotum. Penile length was 5.2 cm and width 2.0 cm. There was absent pubic or axillary hair. Pronation and supination of his upper limbs were reduced and x-ray of both elbow joints revealed bilateral radioulnar synostosis. The baseline laboratory data were LH < 0.1 mIU/ml, FSH 1.4 mIU/ml, testosterone 0.6 nmol/L with raised estradiol, 96 pmol/L. HCG stimulation test showed poor Leydig cell response. The karyotype based on 76 cells was 47 XXY[9]/46 XX[67] with SRY positive. Laparoscopic examination revealed no Mullerian structures.\n\n\nCONCLUSION\nInsisting on an adequate number of cells (at least 50) to be examined during karyotyping is important so as not to miss diagnosing mosaicism.",
"title": ""
},
{
"docid": "5ecf501c24bb7fcb4bbc8fccf5715206",
"text": "The rise of blockchain technologies has given a boost to social good projects, which are trying to exploit various characteristic features of blockchains: the quick and inexpensive transfer of cryptocurrency, the transparency of transactions, the ability to tokenize any kind of assets, and the increase in trustworthiness due to decentralization. However, the swift pace of innovation in blockchain technologies, and the hype that has surrounded their \"disruptive potential\", make it difficult to understand whether these technologies are applied correctly, and what one should expect when trying to apply them to social good projects. This paper addresses these issues, by systematically analysing a collection of 120 blockchain-enabled social good projects. Focussing on measurable and objective aspects, we try to answer various relevant questions: which features of blockchains are most commonly used? Do projects have success in fund raising? Are they making appropriate choices on the blockchain architecture? How many projects are released to the public, and how many are eventually abandoned?",
"title": ""
},
{
"docid": "4cbec8031ea32380675b1d8dff107cab",
"text": "Quorum-sensing bacteria communicate with extracellular signal molecules called autoinducers. This process allows community-wide synchronization of gene expression. A screen for additional components of the Vibrio harveyi and Vibrio cholerae quorum-sensing circuits revealed the protein Hfq. Hfq mediates interactions between small, regulatory RNAs (sRNAs) and specific messenger RNA (mRNA) targets. These interactions typically alter the stability of the target transcripts. We show that Hfq mediates the destabilization of the mRNA encoding the quorum-sensing master regulators LuxR (V. harveyi) and HapR (V. cholerae), implicating an sRNA in the circuit. Using a bioinformatics approach to identify putative sRNAs, we identified four candidate sRNAs in V. cholerae. The simultaneous deletion of all four sRNAs is required to stabilize hapR mRNA. We propose that Hfq, together with these sRNAs, creates an ultrasensitive regulatory switch that controls the critical transition into the high cell density, quorum-sensing mode.",
"title": ""
},
{
"docid": "92c3738d8873eb223a5a478cc76c95b0",
"text": "Visual target tracking is one of the major fields in computer vision system. Object tracking has many practical applications such as automated surveillance system, military guidance, traffic management system, fault detection system, artificial intelligence and robot vision system. But it is difficult to track objects with image sensor. Especially, multiple objects tracking is harder than single object tracking. This paper proposes multiple objects tracking algorithm based on the Kalman filter. Our algorithm uses the Kalman filter as many as the number of moving objects in the image frame. If many moving objects exist in the image, however, we obtain multiple measurements. Therefore, precise data association is necessary in order to track multiple objects correctly. Another problem of multiple objects tracking is occlusion that causes merge and split. For solving these problems, this paper defines the cost function using some factors. Experiments using Matlab show that the performance of the proposed algorithm is appropriate for multiple objects tracking in real-time.",
"title": ""
},
{
"docid": "050dd71858325edd4c1a42fc1a25de95",
"text": "This paper presents Disco, a prototype for supporting knowledge workers in exploring, reviewing and sorting collections of textual data. The goal is to facilitate, accelerate and improve the discovery of information. To this end, it combines Semantic Relatedness techniques with a review workflow developed in a tangible environment. Disco uses a semantic model that is leveraged on-line in the course of search sessions, and accessed through natural hand-gesture, in a simple and intuitive way.",
"title": ""
},
{
"docid": "c4d2748fbab63fb3ab320f4d2c0fd18b",
"text": "In human fingertips, the fingerprint patterns and interlocked epidermal-dermal microridges play a critical role in amplifying and transferring tactile signals to various mechanoreceptors, enabling spatiotemporal perception of various static and dynamic tactile signals. Inspired by the structure and functions of the human fingertip, we fabricated fingerprint-like patterns and interlocked microstructures in ferroelectric films, which can enhance the piezoelectric, pyroelectric, and piezoresistive sensing of static and dynamic mechanothermal signals. Our flexible and microstructured ferroelectric skins can detect and discriminate between multiple spatiotemporal tactile stimuli including static and dynamic pressure, vibration, and temperature with high sensitivities. As proof-of-concept demonstration, the sensors have been used for the simultaneous monitoring of pulse pressure and temperature of artery vessels, precise detection of acoustic sounds, and discrimination of various surface textures. Our microstructured ferroelectric skins may find applications in robotic skins, wearable sensors, and medical diagnostic devices.",
"title": ""
},
{
"docid": "4419d61684dff89f4678afe3b8dc06e0",
"text": "Reason and emotion have long been considered opposing forces. However, recent psychological and neuroscientific research has revealed that emotion and cognition are closely intertwined. Cognitive processing is needed to elicit emotional responses. At the same time, emotional responses modulate and guide cognition to enable adaptive responses to the environment. Emotion determines how we perceive our world, organise our memory, and make important decisions. In this review, we provide an overview of current theorising and research in the Affective Sciences. We describe how psychological theories of emotion conceptualise the interactions of cognitive and emotional processes. We then review recent research investigating how emotion impacts our perception, attention, memory, and decision-making. Drawing on studies with both healthy participants and clinical populations, we illustrate the mechanisms and neural substrates underlying the interactions of cognition and emotion.",
"title": ""
},
{
"docid": "d1e43c347f708547aefa07b3c83ee428",
"text": "Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State of the art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate if NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated telepresence robots (n=38), and interacted with telepresence robots (n=12). Overall, the results from our three studies indicated that NARS may be applied to telepresence robots, and culture, gender, and prior robot experience can be influential factors on the NARS score.",
"title": ""
},
{
"docid": "79b3ed4c5e733c73b5e7ebfdf6069293",
"text": "This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, metal etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, we propose a fully end-to-end approach, which does not require hand-crafted features or CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10Hz, using a conventional GPU, which is enough to achieve realtime semantic reconstruction using a 30fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.",
"title": ""
},
{
"docid": "9d04b10ebe8a65777aacf20fe37b55cb",
"text": "Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.",
"title": ""
},
{
"docid": "5092b52243788c4f4e0c53e7556ed9de",
"text": "This work attempts to address two fundamental questions about the structure of the convolutional neural networks (CNN): 1) why a nonlinear activation function is essential at the filter output of all intermediate layers? 2) what is the advantage of the two-layer cascade system over the one-layer system? A mathematical model called the “REctified-COrrelations on a Sphere” (RECOS) is proposed to answer these two questions. After the CNN training process, the converged filter weights define a set of anchor vectors in the RECOS model. Anchor vectors represent the frequently occurring patterns (or the spectral components). The necessity of rectification is explained using the RECOS model. Then, the behavior of a two-layer RECOS system is analyzed and compared with its one-layer counterpart. The LeNet-5 and the MNIST dataset are used to illustrate discussion points. Finally, the RECOS model is generalized to a multilayer system with the AlexNet as an example.",
"title": ""
},
{
"docid": "e75f830b902ca7d0e8d9e9fa03a62440",
"text": "Changes in synaptic connections are considered essential for learning and memory formation. However, it is unknown how neural circuits undergo continuous synaptic changes during learning while maintaining lifelong memories. Here we show, by following postsynaptic dendritic spines over time in the mouse cortex, that learning and novel sensory experience lead to spine formation and elimination by a protracted process. The extent of spine remodelling correlates with behavioural improvement after learning, suggesting a crucial role of synaptic structural plasticity in memory formation. Importantly, a small fraction of new spines induced by novel experience, together with most spines formed early during development and surviving experience-dependent elimination, are preserved and provide a structural basis for memory retention throughout the entire life of an animal. These studies indicate that learning and daily sensory experience leave minute but permanent marks on cortical connections and suggest that lifelong memories are stored in largely stably connected synaptic networks.",
"title": ""
},
{
"docid": "6cf711826e5718507725ff6f887c7dbc",
"text": "Electronic Support Measures (ESM) system is an important function of electronic warfare which provides the real time projection of radar activities. Such systems may encounter with very high density pulse sequences and it is the main task of an ESM system to deinterleave these mixed pulse trains with high accuracy and minimum computation time. These systems heavily depend on time of arrival analysis and need efficient clustering algorithms to assist deinterleaving process in modern evolving environments. On the other hand, self organizing neural networks stand very promising for this type of radar pulse clustering. In this study, performances of self organizing neural networks that meet such clustering criteria are evaluated in detail and the results are presented.",
"title": ""
}
] | scidocsrr |
c9fb85c377ccc1eb4212759698900753 | Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices | [
{
"docid": "0e12ea5492b911c8879cc5e79463c9fa",
"text": "In this paper, we propose a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. The method fills a gap in current cloud-based mobile reconstruction services as it ensures at capture time that the acquired image set fulfills desired quality and completeness criteria. In contrast to existing systems, the developed framework offers multiple innovative solutions. In particular, we investigate the usability of the available on-device inertial sensors to make the tracking and mapping process more resilient to rapid motions and to estimate the metric scale of the captured scene. Moreover, we propose an efficient and accurate scheme for dense stereo matching which allows to reduce the processing time to interactive speed. We demonstrate the performance of the reconstruction pipeline on multiple challenging indoor and outdoor scenes of different size and depth variability.",
"title": ""
},
{
"docid": "3f6382ed8f0e89be1a752689d54f0d06",
"text": "MonoFusion allows a user to build dense 3D reconstructions of their environment in real-time, utilizing only a single, off-the-shelf web camera as the input sensor. The camera could be one already available in a tablet, phone, or a standalone device. No additional input hardware is required. This removes the need for power intensive active sensors that do not work robustly in natural outdoor lighting. Using the input stream of the camera we first estimate the 6DoF camera pose using a sparse tracking method. These poses are then used for efficient dense stereo matching between the input frame and a key frame (extracted previously). The resulting dense depth maps are directly fused into a voxel-based implicit model (using a computationally inexpensive method) and surfaces are extracted per frame. The system is able to recover from tracking failures as well as filter out geometrically inconsistent noise from the 3D reconstruction. Our method is both simple to implement and efficient, making such systems even more accessible. This paper details the algorithmic components that make up our system and a GPU implementation of our approach. Qualitative results demonstrate high quality reconstructions even visually comparable to active depth sensor-based systems such as KinectFusion.",
"title": ""
},
{
"docid": "c8e5257c2ed0023dc10786a3071c6e6a",
"text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.",
"title": ""
},
{
"docid": "bfd97b5576873345b0474a645ccda1d6",
"text": "We present a direct monocular visual odometry system which runs in real-time on a smartphone. Being a direct method, it tracks and maps on the images themselves instead of extracted features such as keypoints. New images are tracked using direct image alignment, while geometry is represented in the form of a semi-dense depth map. Depth is estimated by filtering over many small-baseline, pixel-wise stereo comparisons. This leads to significantly less outliers and allows to map and use all image regions with sufficient gradient, including edges. We show how a simple world model for AR applications can be derived from semi-dense depth maps, and demonstrate the practical applicability in the context of an AR application in which simulated objects can collide with real geometry.",
"title": ""
}
] | [
{
"docid": "a0ca7d86ae79c263644c8cd5ae4c0aed",
"text": "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.",
"title": ""
},
{
"docid": "9e3d3783aa566b50a0e56c71703da32b",
"text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.",
"title": ""
},
{
"docid": "4261e44dad03e8db3c0520126b9c7c4d",
"text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a may that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.",
"title": ""
},
{
"docid": "143a4fcc0f2949e797e6f51899e811e2",
"text": "A long-standing problem at the interface of artificial intelligence and applied mathematics is to devise an algorithm capable of achieving human level or even superhuman proficiency in transforming observed data into predictive mathematical models of the physical world. In the current era of abundance of data and advanced machine learning capabilities, the natural question arises: How can we automatically uncover the underlying laws of physics from high-dimensional data generated from experiments? In this work, we put forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time. Specifically, we approximate the unknown solution as well as the nonlinear dynamics by two deep neural networks. The first network acts as a prior on the unknown solution and essentially enables us to avoid numerical differentiations which are inherently ill-conditioned and unstable. The second network represents the nonlinear dynamics and helps us distill the mechanisms that govern the evolution of a given spatiotemporal data-set. We test the effectiveness of our approach for several benchmark problems spanning a number of scientific domains and demonstrate how the proposed framework can help us accurately learn the underlying dynamics and forecast future states of the system. In particular, we study the Burgers’, Kortewegde Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and NavierStokes equations.",
"title": ""
},
{
"docid": "27d7f7935c235a3631fba6e3df08f623",
"text": "We investigate the task of Named Entity Recognition (NER) in the domain of biomedical text. There is little published work employing modern neural network techniques in this domain, probably due to the small sizes of human-labeled data sets, as non-trivial neural models would have great difficulty avoiding overfitting. In this work we follow a semi-supervised learning approach: We first train state-of-the art (deep) neural networks on a large corpus of noisy machine-labeled data, then “transfer” and fine-tune the learned model on two higher-quality humanlabeled data sets. This approach yields higher performance than the current best published systems for the class DISEASE. It trails but is not far from the currently best systems for the class CHEM.",
"title": ""
},
{
"docid": "3e605aff5b2ceae91ee0cef42dd36528",
"text": "A new super-concentrated aqueous electrolyte is proposed by introducing a second lithium salt. The resultant ultra-high concentration of 28 m led to more effective formation of a protective interphase on the anode along with further suppression of water activities at both anode and cathode surfaces. The improved electrochemical stability allows the use of TiO2 as the anode material, and a 2.5 V aqueous Li-ion cell based on LiMn2 O4 and carbon-coated TiO2 delivered the unprecedented energy density of 100 Wh kg(-1) for rechargeable aqueous Li-ion cells, along with excellent cycling stability and high coulombic efficiency. It has been demonstrated that the introduction of a second salts into the \"water-in-salt\" electrolyte further pushed the energy densities of aqueous Li-ion cells closer to those of the state-of-the-art Li-ion batteries.",
"title": ""
},
{
"docid": "406a8143edfeab7f97d451d0af9b7058",
"text": "One of the core questions when designing modern Natural Language Processing (NLP) systems is how to model input textual data such that the learning algorithm is provided with enough information to estimate accurate decision functions. The mainstream approach is to represent input objects as feature vectors where each value encodes some of their aspects, e.g., syntax, semantics, etc. Feature-based methods have demonstrated state-of-the-art results on various NLP tasks. However, designing good features is a highly empirical-driven process, it greatly depends on a task requiring a significant amount of domain expertise. Moreover, extracting features for complex NLP tasks often requires expensive pre-processing steps running a large number of linguistic tools while relying on external knowledge sources that are often not available or hard to get. Hence, this process is not cheap and often constitutes one of the major challenges when attempting a new task or adapting to a different language or domain. The problem of modelling input objects is even more acute in cases when the input examples are not just single objects but pairs of objects, such as in various learning to rank problems in Information Retrieval and Natural Language processing. An alternative to feature-based methods is using kernels which are essentially nonlinear functions mapping input examples into some high dimensional space thus allowing for learning decision functions with higher discriminative power. Kernels implicitly generate a very large number of features computing similarity between input examples in that implicit space. A well-designed kernel function can greatly reduce the effort to design a large set of manually designed features often leading to superior results. However, in the recent years, the use of kernel methods in NLP has been greatly underestimated primarily due to the following reasons: (i) learning with kernels is slow as it requires to carry out optimizaiton in the dual space leading to quadratic complexity; (ii) applying kernels to the input objects encoded with vanilla structures, e.g., generated by syntactic parsers, often yields minor improvements over carefully designed feature-based methods. In this thesis, we adopt the kernel learning approach for solving complex NLP tasks and primarily focus on solutions to the aforementioned problems posed by the use of kernels. In particular, we design novel learning algorithms for training Support Vector Machines with structural kernels, e.g., tree kernels, considerably speeding up the training over the conventional SVM training methods. We show that using the training algorithms developed in this thesis allows for trainining tree kernel models on large-scale datasets containing millions of instances, which was not possible before. Next, we focus on the problem of designing input structures that are fed to tree kernel functions to automatically generate a large set of tree-fragment features. We demonstrate that previously used plain structures generated by syntactic parsers, e.g., syntactic or dependency trees, are often a poor choice thus compromising the expressivity offered by a tree kernel learning framework. We propose several effective design patterns of the input tree structures for various NLP tasks ranging from sentiment analysis to answer passage reranking. The central idea is to inject additional semantic information relevant for the task directly into the tree nodes and let the expressive kernels generate rich feature spaces. 
For the opinion mining tasks, the additional semantic information injected into tree nodes can be word polarity labels, while for more complex tasks of modelling text pairs the relational information about overlapping words in a pair appears to significantly improve the accuracy of the resulting models. Finally, we observe that both feature-based and kernel methods typically treat words as atomic units where matching different yet semantically similar words is problematic. Conversely, the idea of distributional approaches to model words as vectors is much more effective in establishing a semantic match between words and phrases. While tree kernel functions do allow for a more flexible matching between phrases and sentences through matching their syntactic contexts, their representation can not be tuned on the training set as it is possible with distributional approaches. Recently, deep learning approaches have been applied to generalize the distributional word matching problem to matching sentences taking it one step further by learning the optimal sentence representations for a given task. Deep neural networks have already claimed state-of-the-art performance in many computer vision, speech recognition, and natural language tasks. Following this trend, this thesis also explores the virtue of deep learning architectures for modelling input texts and text pairs where we build on some of the ideas to model input objects proposed within the tree kernel learning framework. In particular, we explore the idea of relational linking (proposed in the preceding chapters to encode text pairs using linguistic tree structures) to design a state-of-the-art deep learning architecture for modelling text pairs. We compare the proposed deep learning models that require even less manual intervention in the feature design process then previously described tree kernel methods that already offer a very good trade-off between the feature-engineering effort and the expressivity of the resulting representation. Our deep learning models demonstrate the state-of-the-art performance on a recent benchmark for Twitter Sentiment Analysis, Answer Sentence Selection and Microblog retrieval.",
"title": ""
},
{
"docid": "f5bea5413ad33191278d7630a7e18e39",
"text": "Speech activity detection (SAD) on channel transmissions is a critical preprocessing task for speech, speaker and language recognition or for further human analysis. This paper presents a feature combination approach to improve SAD on highly channel degraded speech as part of the Defense Advanced Research Projects Agency’s (DARPA) Robust Automatic Transcription of Speech (RATS) program. The key contribution is the feature combination exploration of different novel SAD features based on pitch and spectro-temporal processing and the standard Mel Frequency Cepstral Coefficients (MFCC) acoustic feature. The SAD features are: (1) a GABOR feature representation, followed by a multilayer perceptron (MLP); (2) a feature that combines multiple voicing features and spectral flux measures (Combo); (3) a feature based on subband autocorrelation (SAcC) and MLP postprocessing and (4) a multiband comb-filter F0 (MBCombF0) voicing measure. We present single, pairwise and all feature combinations, show high error reductions from pairwise feature level combination over the MFCC baseline and show that the best performance is achieved by the combination of all features.",
"title": ""
},
{
"docid": "a299b0f58aaba6efff9361ff2b5a1e69",
"text": "The continuing growth of World Wide Web and on-line text collections makes a large volume of information available to users. Automatic text summarization allows users to quickly understand documents. In this paper, we propose an automated technique for single document summarization which combines content-based and graph-based approaches and introduce the Hopfield network algorithm as a technique for ranking text segments. A series of experiments are performed using the DUC collection and a Thai-document collection. The results show the superiority of the proposed technique over reference systems, in addition the Hopfield network algorithm on undirected graph is shown to be the best text segment ranking algorithm in the study",
"title": ""
},
{
"docid": "41c3505d1341247972d99319cba3e7ba",
"text": "A 32-year-old pregnant woman in the 25th week of pregnancy underwent oral glucose tolerance screening at the diabetologist's. Later that day, she was found dead in her apartment possibly poisoned with Chlumsky disinfectant solution (solutio phenoli camphorata). An autopsy revealed chemical burns in the digestive system. The lungs and the brain showed signs of severe edema. The blood of the woman and fetus was analyzed using gas chromatography with mass spectrometry and revealed phenol, its metabolites (phenyl glucuronide and phenyl sulfate) and camphor. No ethanol was found in the blood samples. Both phenol and camphor are contained in Chlumsky disinfectant solution, which is used for disinfecting surgical equipment in healthcare facilities. Further investigation revealed that the deceased woman had been accidentally administered a disinfectant instead of a glucose solution by the nurse, which resulted in acute intoxication followed by the death of the pregnant woman and the fetus.",
"title": ""
},
{
"docid": "00ed940459b92d92981e4132a2b5e9c0",
"text": "Variants of Hirschsprung disease are conditions that clinically resemble Hirschsprung disease, despite the presence of ganglion cells in rectal suction biopsies. The characterization and differentiation of various entities are mainly based on histologic, immunohistochemical, and electron microscopy findings of biopsies from patients with functional intestinal obstruction. Intestinal neuronal dysplasia is histologically characterized by hyperganglionosis, giant ganglia, and ectopic ganglion cells. In most intestinal neuronal dysplasia cases, conservative treatments such as laxatives and enema are sufficient. Some patients may require internal sphincter myectomy. Patients with the diagnosis of isolated hypoganglionosis show decreased numbers of nerve cells, decreased plexus area, as well as increased distance between ganglia in rectal biopsies, and resection of the affected segment has been the treatment of choice. The diagnosis of internal anal sphincter achalasia is based on abnormal rectal manometry findings, whereas rectal suction biopsies display presence of ganglion cells as well as normal acetylcholinesterase activity. Internal anal sphincter achalasia is either treated by internal sphincter myectomy or botulinum toxin injection. Megacystis microcolon intestinal hypoperistalsis is a rare condition, and the most severe form of functional intestinal obstruction in the newborn. Megacystis microcolon intestinal hypoperistalsis is characterized by massive abdominal distension caused by a largely dilated nonobstructed bladder, microcolon, and decreased or absent intestinal peristalsis. Although the outcome has improved in recent years, survivors have to be either maintained by total parenteral nutrition or have undergone multivisceral transplant. This review article summarizes the current knowledge of the aforementioned entities of variant HD.",
"title": ""
},
{
"docid": "aa4bad972cb53de2e60fd998df08d774",
"text": "170 undergraduate students completed the Boredom Proneness Scale by Farmer and Sundberg and the Multiple Affect Adjective Checklist by Zuckerman and Lubin. Significant negative relationships were found between boredom proneness and negative affect scores (i.e., Depression, Hostility, Anxiety). Significant positive correlations also obtained between boredom proneness and positive affect (i.e., Positive Affect, Sensation Seeking). The correlations between boredom proneness \"subscales\" and positive and negative affect were congruent with those obtained using total boredom proneness scores. Implications for counseling are discussed.",
"title": ""
},
{
"docid": "7a54331811a4a93df69365b6756e1d5f",
"text": "With object storage services becoming increasingly accepted as replacements for traditional file or block systems, it is important to effectively measure the performance of these services. Thus people can compare different solutions or tune their systems for better performance. However, little has been reported on this specific topic as yet. To address this problem, we present COSBench (Cloud Object Storage Benchmark), a benchmark tool that we are currently working on in Intel for cloud object storage services. In addition, in this paper, we also share the results of the experiments we have performed so far.",
"title": ""
},
{
"docid": "1aef8b7e5b4e3237b3d6703c15baa990",
"text": "This paper demonstrates six-metal-layer antenna-to-receiver signal transitions on panel-scale processed ultra-thin glass-based 5G module substrates with 50-Ω transmission lines and micro-via transitions in re-distribution layers. The glass modules consist of low-loss dielectric thin-films laminated on 100-μm glass cores. Modeling, design, fabrication, and characterization of the multilayered signal interconnects were performed at 28-GHz band. The surface planarity and dimensional stability of glass substrates enabled the fabrication of highly-controlled signal traces with tolerances of 2% inside the re-distribution layers on low-loss dielectric build-up thin-films. The fabricated transmission lines showed 0.435 dB loss with 4.19 mm length, while microvias in low-loss dielectric thin-films showed 0.034 dB/microvia. The superiority of glass substrates enable low-loss link budget with high precision from chip to antenna for 5G communications.",
"title": ""
},
{
"docid": "a1623a10e06537a038ce3eaa1cfbeed7",
"text": "We present a simple zero-knowledge proof of knowledge protocol of which many protocols in the literature are instantiations. These include Schnorr’s protocol for proving knowledge of a discrete logarithm, the Fiat-Shamir and Guillou-Quisquater protocols for proving knowledge of a modular root, protocols for proving knowledge of representations (like Okamoto’s protocol), protocols for proving equality of secret values, a protocol for proving the correctness of a Diffie-Hellman key, protocols for proving the multiplicative relation of three commitments (as required in secure multi-party computation), and protocols used in credential systems. This shows that a single simple treatment (and proof), at a high level of abstraction, can replace the individual previous treatments. Moreover, one can devise new instantiations of the protocol.",
"title": ""
},
{
"docid": "db8cbcc8a7d233d404a18a54cb9fedae",
"text": "Edge preserving filters preserve the edges and its information while blurring an image. In other words they are used to smooth an image, while reducing the edge blurring effects across the edge like halos, phantom etc. They are nonlinear in nature. Examples are bilateral filter, anisotropic diffusion filter, guided filter, trilateral filter etc. Hence these family of filters are very useful in reducing the noise in an image making it very demanding in computer vision and computational photography applications like denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer, relighting etc. This paper provides a concrete introduction to edge preserving filters starting from the heat diffusion equation in olden to recent eras, an overview of its numerous applications, as well as mathematical analysis, various efficient and optimized ways of implementation and their interrelationships, keeping focus on preserving the boundaries, spikes and canyons in presence of noise. Furthermore it provides a realistic notion for efficient implementation with a research scope for hardware realization for further acceleration.",
"title": ""
},
{
"docid": "a4a56e0647849c22b48e7e5dc3f3049b",
"text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system set a time limit and uses only the last few seconds data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process",
"title": ""
},
{
"docid": "1a7cfc19e7e3f9baf15e4a7450338c33",
"text": "The degree to which perceptual awareness of threat stimuli and bodily states of arousal modulates neural activity associated with fear conditioning is unknown. We used functional magnetic neuroimaging (fMRI) to study healthy subjects and patients with peripheral autonomic denervation to examine how the expression of conditioning-related activity is modulated by stimulus awareness and autonomic arousal. In controls, enhanced amygdala activity was evident during conditioning to both \"seen\" (unmasked) and \"unseen\" (backward masked) stimuli, whereas insula activity was modulated by perceptual awareness of a threat stimulus. Absent peripheral autonomic arousal, in patients with autonomic denervation, was associated with decreased conditioning-related activity in insula and amygdala. The findings indicate that the expression of conditioning-related neural activity is modulated by both awareness and representations of bodily states of autonomic arousal.",
"title": ""
},
{
"docid": "58d7e76a4b960e33fc7b541d04825dc9",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
},
{
"docid": "86e16c911d9a381ca46225c65222177d",
"text": "Steep, soil-mantled hillslopes evolve through the downslope movement of soil, driven largely by slope-dependent ransport processes. Most landscape evolution models represent hillslope transport by linear diffusion, in which rates of sediment transport are proportional to slope, such that equilibrium hillslopes should have constant curvature between divides and channels. On many soil-mantled hillslopes, however, curvature appears to vary systematically, such that slopes are typically convex near the divide and become increasingly planar downslope. This suggests that linear diffusion is not an adequate model to describe the entire morphology of soil-mantled hillslopes. Here we show that the interaction between local disturbances (such as rainsplash and biogenic activity) and frictional and gravitational forces results in a diffusive transport law that depends nonlinearly on hillslope gradient. Our proposed transport law (1) approximates linear diffusion at low gradients and (2) indicates that sediment flux increases rapidly as gradient approaches a critical value. We calibrated and tested this transport law using high-resolution topographic data from the Oregon Coast Range. These data, obtained by airborne laser altimetry, allow us to characterize hillslope morphology at •2 m scale. At five small basins in our study area, hillslope curvature approaches zero with increasing gradient, consistent with our proposed nonlinear diffusive transport law. Hillslope gradients tend to cluster near values for which sediment flux increases rapidly with slope, such that large changes in erosion rate will correspond to small changes in gradient. Therefore average hillslope gradient is unlikely to be a reliable indicator of rates of tectonic forcing or baselevel owering. Where hillslope erosion is dominated by nonlinear diffusion, rates of tectonic forcing will be more reliably reflected in hillslope curvature near the divide rather than average hillslope gradient.",
"title": ""
}
] | scidocsrr |
4457e9caa09452b094b448ff520bf0ff | Estimation of Arrival Flight Delay and Delay Propagation in a Busy Hub-Airport | [
{
"docid": "feb649029daef80f2ecf33221571a0b1",
"text": "The National Airspace System (NAS) is a large and complex system with thousands of interrelated components: administration, control centers, airports, airlines, aircraft, passengers, etc. The complexity of the NAS creates many difficulties in management and control. One of the most pressing problems is flight delay. Delay creates high cost to airlines, complaints from passengers, and difficulties for airport operations. As demand on the system increases, the delay problem becomes more and more prominent. For this reason, it is essential for the Federal Aviation Administration to understand the causes of delay and to find ways to reduce delay. Major contributing factors to delay are congestion at the origin airport, weather, increasing demand, and air traffic management (ATM) decisions such as the Ground Delay Programs (GDP). Delay is an inherently stochastic phenomenon. Even if all known causal factors could be accounted for, macro-level national airspace system (NAS) delays could not be predicted with certainty from micro-level aircraft information. This paper presents a stochastic model that uses Bayesian Networks (BNs) to model the relationships among different components of aircraft delay and the causal factors that affect delays. A case study on delays of departure flights from Chicago O’Hare international airport (ORD) to Hartsfield-Jackson Atlanta International Airport (ATL) reveals how local and system level environmental and human-caused factors combine to affect components of delay, and how these components contribute to the final arrival delay at the destination airport.",
"title": ""
}
] | [
{
"docid": "736a454a8aa08edf645312cecc7925c3",
"text": "This paper describes an <i>analogy ontology</i>, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's <i>structure-mapping</i> theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services. Queries that include analogical operations can be formulated in the same way as standard logical inference, and analogical processing systems in turn can call on the services of first-principles reasoners for creating cases and validating their conjectures. We illustrate the utility of the analogy ontology by demonstrating how it has been used in three systems: A crisis management analogical reasoner that answers questions about international incidents, a course of action analogical critiquer that provides feedback about military plans, and a comparison question-answering system for knowledge capture. These systems rely on large, general-purpose knowledge bases created by other research groups, thus demonstrating the generality and utility of these ideas.",
"title": ""
},
{
"docid": "3c29a0579a2f7d4f010b9b2f2df16e2c",
"text": "In recent years research on human activity recognition using wearable sensors has enabled to achieve impressive results on real-world data. However, the most successful activity recognition algorithms require substantial amounts of labeled training data. The generation of this data is not only tedious and error prone but also limits the applicability and scalability of today's approaches. This paper explores and systematically analyzes two different techniques to significantly reduce the required amount of labeled training data. The first technique is based on semi-supervised learning and uses self-training and co-training. The second technique is inspired by active learning. In this approach the system actively asks which data the user should label. With both techniques, the required amount of training data can be reduced significantly while obtaining similar and sometimes even better performance than standard supervised techniques. The experiments are conducted using one of the largest and richest currently available datasets.",
"title": ""
},
{
"docid": "c3ba6fea620b410d5b6d9b07277d431e",
"text": "Nanonetworks, i.e., networks of nano-sized devices, are the enabling technology of long-awaited applications in the biological, industrial and military fields. For the time being, the size and power constraints of nano-devices limit the applicability of classical wireless communication in nanonetworks. Alternatively, nanomaterials can be used to enable electromagnetic (EM) communication among nano-devices. In this paper, a novel graphene-based nano-antenna, which exploits the behavior of Surface Plasmon Polariton (SPP) waves in semi-finite size Graphene Nanoribbons (GNRs), is proposed, modeled and analyzed. First, the conductivity of GNRs is analytically and numerically studied by starting from the Kubo formalism to capture the impact of the electron lateral confinement in GNRs. Second, the propagation of SPP waves in GNRs is analytically and numerically investigated, and the SPP wave vector and propagation length are computed. Finally, the nano-antenna is modeled as a resonant plasmonic cavity, and its frequency response is determined. The results show that, by exploiting the high mode compression factor of SPP waves in GNRs, graphene-based plasmonic nano-antennas are able to operate at much lower frequencies than their metallic counterparts, e.g., the Terahertz Band for a one-micrometer-long ten-nanometers-wide antenna. This result has the potential to enable EM communication in nanonetworks.",
"title": ""
},
{
"docid": "f753712eed9e5c210810d2afd1366eb8",
"text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.",
"title": ""
},
{
"docid": "b15078182915859c3eab4b174115cd0f",
"text": "We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language.",
"title": ""
},
{
"docid": "bf7b3cdb178fd1969257f56c0770b30b",
"text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.",
"title": ""
},
{
"docid": "d3d471b6b377d8958886a2f6c89d5061",
"text": "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.",
"title": ""
},
{
"docid": "a0acd4870951412fa31bc7803f927413",
"text": "Surprisingly little is understood about the physiologic and pathologic processes that involve intraoral sebaceous glands. Neoplasms are rare. Hyperplasia of these glands is undoubtedly more common, but criteria for the diagnosis of intraoral sebaceous hyperplasia have not been established. These lesions are too often misdiagnosed as large \"Fordyce granules\" or, when very large, as sebaceous adenomas. On the basis of a series of 31 nonneoplastic sebaceous lesions and on published data, the following definition is proposed: intraoral sebaceous hyperplasia occurs when a lesion, judged clinically to be a distinct abnormality that requires biopsy for diagnosis or confirmation of clinical impression, has histologic features of one or more well-differentiated sebaceous glands that exhibit no fewer than 15 lobules per gland. Sebaceous glands with fewer than 15 lobules that form an apparently distinct clinical lesion on the buccal mucosa are considered normal, whereas similar lesions of other intraoral sites are considered ectopic sebaceous glands. Sebaceous adenomas are less differentiated than sebaceous hyperplasia.",
"title": ""
},
{
"docid": "5a583fe6fae9f0624bcde5043c56c566",
"text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with more compact size and almost same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts. Good performance is achieved for both the flat and conformal antennas.",
"title": ""
},
{
"docid": "8b519431416a4bac96a8a975d8973ef9",
"text": "A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on ways to further improvement.",
"title": ""
},
{
"docid": "0b64a9277c3ad2713a14f0c9ab02fd81",
"text": "Insulin-like growth factor 2 (IGF2) is a 7.5 kDa mitogenic peptide hormone expressed by liver and many other tissues. It is three times more abundant in serum than IGF1, but our understanding of its physiological and pathological roles has lagged behind that of IGF1. Expression of the IGF2 gene is strictly regulated. Over-expression occurs in many cancers and is associated with a poor prognosis. Elevated serum IGF2 is also associated with increased risk of developing various cancers including colorectal, breast, prostate and lung. There is established clinical utility for IGF2 measurement in the diagnosis of non-islet cell tumour hypoglycaemia, a condition characterised by a molar IGF2:IGF1 ratio O10. Recent advances in understanding of the pathophysiology of IGF2 in cancer have suggested much novel clinical utility for its measurement. Measurement of IGF2 in blood and genetic and epigenetic tests of the IGF2 gene may help assess cancer risk and prognosis. Further studies will determine whether these tests enter clinical practice. New therapeutic approaches are being developed to target IGF2 action. This review provides a clinical perspective on IGF2 and an update on recent research findings. Key Words",
"title": ""
},
{
"docid": "c16499b3945603d04cf88fec7a2c0a85",
"text": "Recovering structure and motion parameters given a image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.",
"title": ""
},
{
"docid": "84d0f682d23d0f54789f83a0d68f4b0e",
"text": "AIM\nTetracycline-stained tooth structure is difficult to bleach using nightguard tray methods. The possible benefits of in-office light-accelerated bleaching systems based on the photo-Fenton reaction are of interest as possible adjunctive treatments. This study was a proof of concept for possible benefits of this approach, using dentine slabs from human tooth roots stained in a reproducible manner with the tetracycline antibiotic demeclocycline hydrochloride.\n\n\nMATERIALS AND METHODS\nColor changes overtime in tetra-cycline stained roots from single rooted teeth treated using gel (Zoom! WhiteSpeed(®)) alone, blue LED light alone, or gel plus light in combination were tracked using standardized digital photography. Controls received no treatment. Changes in color channel data were tracked overtime, for each treatment group (N = 20 per group).\n\n\nRESULTS\nDentin was lighter after bleaching, with significant improvements in the dentin color for the blue channel (yellow shade) followed by the green channel and luminosity. The greatest changes occurred with gel activated by light (p < 0.0001), which was superior to effects seen with gel alone. Use of the light alone did not significantly alter shade.\n\n\nCONCLUSION\nThis proof of concept study demonstrates that bleaching using the photo-Fenton chemistry is capable of lightening tetracycline-stained dentine. Further investigation of the use of this method for treating tetracycline-stained teeth in clinical settings appears warranted.\n\n\nCLINICAL SIGNIFICANCE\nBecause tetracycline staining may respond to bleaching treatments based on the photo-Fenton reaction, systems, such as Zoom! WhiteSpeed, may have benefits as adjuncts to home bleaching for patients with tetracycline-staining.",
"title": ""
},
{
"docid": "cdb83e9a31172d6687622dc7ac841c91",
"text": "Introduction Various forms of social media are used by many mothers to maintain social ties and manage the stress associated with their parenting roles and responsibilities. ‘Mommy blogging’ as a specific type of social media usage is a common and growing phenomenon, but little is known about mothers’ blogging-related experiences and how these may contribute to their wellbeing. This exploratory study investigated the blogging-related motivations and goals of Australian mothers. Methods An online survey was emailed to members of an Australian online parenting community. The survey included open-ended questions that invited respondents to discuss their motivations and goals for blogging. A thematic analysis using a grounded approach was used to analyze the qualitative data obtained from 235 mothers. Results Five primary motivations for blogging were identified: developing connections with others, experiencing heightened levels of mental stimulation, achieving self-validation, contributing to the welfare of others, and extending skills and abilities. Discussion These motivations are discussed in terms of their various properties and dimensions to illustrate how these mothers appear to use blogging to enhance their psychological wellbeing.",
"title": ""
},
{
"docid": "8ccca373252c045107753081db3de051",
"text": "We describe a computer system that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music for soloist and accompaniment. A Bayesian network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played, relating the two parts through a layer of hidden variables. The network is first constructed using the rhythmic information contained in the musical score. The network is then trained to capture the musical interpretations of the soloist and accompanist in an off-line rehearsal phase. During live accompaniment the learned distribution of the network is combined with a real-time analysis of the soloist's acoustic signal, performed with a hidden Markov model, to generate a musically principled accompaniment that respects all available sources of knowledge. A live demonstration will be provided.",
"title": ""
},
{
"docid": "291ece850c1c6afcda49ac2e8a74319e",
"text": "The aim of this paper is to explore how well the task of text vs. nontext distinction can be solved in online handwritten documents using only offline information. Two systems are introduced. The first system generates a document segmentation first. For this purpose, four methods originally developed for machine printed documents are compared: x-y cut, morphological closing, Voronoi segmentation, and whitespace analysis. A state-of-the art classifier then distinguishes between text and non-text zones. The second system follows a bottom-up approach that classifies connected components. Experiments are performed on a new dataset of online handwritten documents containing different content types in arbitrary arrangements. The best system assigns 94.3% of the pixels to the correct class.",
"title": ""
},
{
"docid": "6b49ccb6cb443c89fd32f407cb575653",
"text": "Recently, there has been a growing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments. In this paper, we explore the use of attention-based encoder-decoder model for Mandarin speech recognition on a voice search task. Previous attempts have shown that applying attention-based encoder-decoder to Mandarin speech recognition was quite difficult due to the logographic orthography of Mandarin, the large vocabulary and the conditional dependency of the attention model. In this paper, we use character embedding to deal with the large vocabulary. Several tricks are used for effective model training, including L2 regularization, Gaussian weight noise and frame skipping. We compare two attention mechanisms and use attention smoothing to cover long context in the attention model. Taken together, these tricks allow us to finally achieve a character error rate (CER) of 3.58% and a sentence error rate (SER) of 7.43% on the MiTV voice search dataset. While together with a trigram language model, CER and SER reach 2.81% and 5.77%, respectively.",
"title": ""
},
{
"docid": "14c653f1b4e29fd6cb6a0805471c0906",
"text": "3D object detection and pose estimation from a single image are two inherently ambiguous problems. Oftentimes, objects appear similar from different viewpoints due to shape symmetries, occlusion and repetitive textures. This ambiguity in both detection and pose estimation means that an object instance can be perfectly described by several different poses and even classes. In this work we propose to explicitly deal with this uncertainty. For each object instance we predict multiple pose and class outcomes to estimate the specific pose distribution generated by symmetries and repetitive textures. The distribution collapses to a single outcome when the visual appearance uniquely identifies just one valid pose. We show the benefits of our approach which provides not only a better explanation for pose ambiguity, but also a higher accuracy in terms of pose estimation.",
"title": ""
},
{
"docid": "ebe138de5aec0be8cb2e80adb8d59246",
"text": "In recent years, online reviews have become the most important resource of customers’ opinions. These reviews are used increasingly by individuals and organizations to make purchase and business decisions. Unfortunately, driven by the desire for profit or publicity, fraudsters have produced deceptive (spam) reviews. The fraudsters’ activities mislead potential customers and organizations reshaping their businesses and prevent opinion-mining techniques from reaching accurate conclusions. The present research focuses on systematically analyzing and categorizingmodels that detect review spam. Next, the study proceeds to assess them in terms of accuracy and results. We find that studies can be categorized into three groups that focus on methods to detect spam reviews, individual spammers and group spam. Different detection techniques have different strengths and weaknesses and thus favor different detection contexts. 2014 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "e5b2857bfe745468453ef9dabbf5c527",
"text": "We assume that a high-dimensional datum, like an image, is a compositional expression of a set of properties, with a complicated non-linear relationship between the datum and its properties. This paper proposes a factorial mixture prior for capturing latent properties, thereby adding structured compositionality to deep generative models. The prior treats a latent vector as belonging to Cartesian product of subspaces, each of which is quantized separately with a Gaussian mixture model. Some mixture components can be set to represent properties as observed random variables whenever labeled properties are present. Through a combination of stochastic variational inference and gradient descent, a method for learning how to infer discrete properties in an unsupervised or semi-supervised way is outlined and empirically evaluated.",
"title": ""
}
] | scidocsrr |
de73727725559471811181920e733481 | Moving average reversion strategy for on-line portfolio selection | [
{
"docid": "dc187c1fb2af0cfdf0d39295151f9075",
"text": "Online portfolio selection is a fundamental problem in computational finance, which has been extensively studied across several research communities, including finance, statistics, artificial intelligence, machine learning, and data mining. This article aims to provide a comprehensive survey and a structural understanding of online portfolio selection techniques published in the literature. From an online machine learning perspective, we first formulate online portfolio selection as a sequential decision problem, and then we survey a variety of state-of-the-art approaches, which are grouped into several major categories, including benchmarks, Follow-the-Winner approaches, Follow-the-Loser approaches, Pattern-Matching--based approaches, and Meta-Learning Algorithms. In addition to the problem formulation and related algorithms, we also discuss the relationship of these algorithms with the capital growth theory so as to better understand the similarities and differences of their underlying trading ideas. This article aims to provide a timely and comprehensive survey for both machine learning and data mining researchers in academia and quantitative portfolio managers in the financial industry to help them understand the state of the art and facilitate their research and practical applications. We also discuss some open issues and evaluate some emerging new trends for future research.",
"title": ""
}
] | [
{
"docid": "ae92750b161381ac02c8600eb4c93beb",
"text": "Textual-based password authentication scheme tends to be more vulnerable to attacks such as shouldersurfing and hidden camera. To overcome the vulnerabilities of traditional methods, visual or graphical password schemes have been developed as possible alternative solutions to text-based password schemes. Because simply adopting graphical password authentication also has some drawbacks, schemes using graphic and text have been developed. In this paper, we propose a hybrid password authentication scheme based on shape and text. It uses shapes of strokes on the grid as the origin passwords and allows users to login with text passwords via traditional input devices. The method provides strong resistant to hidden-camera and shoulder-surfing. Moreover, the scheme has high scalability and flexibility to enhance the authentication process security. The analysis of the security level of this approach is also discussed.",
"title": ""
},
{
"docid": "f5f6036fa3f8c16ad36b3c65794fc86b",
"text": "Cloud computing has become the buzzword in the industry today. Though, it is not an entirely new concept but in today’s digital age, it has become ubiquitous due to the proliferation of Internet, broadband, mobile devices, better bandwidth and mobility requirements for end-users (be it consumers, SMEs or enterprises). In this paper, the focus is on the perceived inclination of micro and small businesses (SMEs or SMBs) toward cloud computing and the benefits reaped by them. This paper presents five factors nfrastructure-as-a-Service (IaaS) mall and medium enterprises (SMEs’) mall and medium businesses (SMBs’) influencing the cloud usage by this business community, whose needs and business requirements are very different from large enterprises. Firstly, ease of use and convenience is the biggest favorable factor followed by security and privacy and then comes the cost reduction. The fourth factor reliability is ignored as SMEs do not consider cloud as reliable. Lastly but not the least, SMEs do not want to use cloud for sharing and collaboration and prefer their old conventional methods for sharing and collaborating with their stakeholders.",
"title": ""
},
{
"docid": "790ac9330d698cf5d6f3f8fc7891f090",
"text": "It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than those of convention first-order iterative algorithms when the overlap in the given mixture is small. But this argument has not been mathematically proved yet. This article studies this problem asymptotically in the setting of gaussian mixtures under the theoretical framework of Xu and Jordan (1996). It has been proved that the asymptotic convergence rate of the EM algorithm for gaussian mixtures locally around the true solution is o(e0.5()), where > 0 is an arbitrarily small number, o(x) means that it is a higher-order infinitesimal as x 0, and e() is a measure of the average overlap of gaussians in the mixture. In other words, the large sample local convergence rate for the EM algorithm tends to be asymptotically superlinear when e() tends to zero.",
"title": ""
},
{
"docid": "cb59a7493f6b9deee4691e6f97c93a1f",
"text": "AIMS AND OBJECTIVES\nThis integrative review of the literature addresses undergraduate nursing students' attitudes towards and use of research and evidence-based practice, and factors influencing this. Current use of research and evidence within practice, and the influences and perceptions of students in using these tools in the clinical setting are explored.\n\n\nBACKGROUND\nEvidence-based practice is an increasingly critical aspect of quality health care delivery, with nurses requiring skills in sourcing relevant information to guide the care they provide. Yet, barriers to engaging in evidence-based practice remain. To increase nurses' use of evidence-based practice within healthcare settings, the concepts and skills required must be introduced early in their career. To date, however, there is little evidence to show if and how this inclusion makes a difference.\n\n\nDESIGN\nIntegrative literature review.\n\n\nMETHODS\nProQuest, Summon, Science Direct, Ovid, CIAP, Google scholar and SAGE databases were searched, and Snowball search strategies used. One hundred and eighty-one articles were reviewed. Articles were then discarded for irrelevance. Nine articles discussed student attitudes and utilisation of research and evidence-based practice.\n\n\nRESULTS\nFactors surrounding the attitudes and use of research and evidence-based practice were identified, and included the students' capability beliefs, the students' attitudes, and the attitudes and support capabilities of wards/preceptors.\n\n\nCONCLUSIONS\nUndergraduate nursing students are generally positive toward using research for evidence-based practice, but experience a lack of support and opportunity. These students face cultural and attitudinal disadvantage, and lack confidence to practice independently. Further research and collaboration between educational facilities and clinical settings may improve utilisation.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThis paper adds further discussion to the topic from the perspective of and including influences surrounding undergraduate students and new graduate nurses.",
"title": ""
},
{
"docid": "a09fb2b15ebf81006ccda273a141412a",
"text": "Computing containment relations between massive collections of sets is a fundamental operation in data management, for example in graph analytics and data mining applications. Motivated by recent hardware trends, in this paper we present two novel solutions for computing set-containment joins over massive sets: the Patricia Trie-based Signature Join (PTSJ) and PRETTI+, a Patricia trie enhanced extension of the state-of-the-art PRETTI join. The compact trie structure not only enables efficient use of main-memory, but also significantly boosts the performance of both approaches. By carefully analyzing the algorithms and conducting extensive experiments with various synthetic and real-world datasets, we show that, in many practical cases, our algorithms are an order of magnitude faster than the state-of-the-art.",
"title": ""
},
{
"docid": "b0991cd60b3e94c0ed3afede89e13f36",
"text": "It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.",
"title": ""
},
{
"docid": "19aa8d26eae39aa1360aba38aaefc29e",
"text": "We present a matrix factorization model inspired by challenges we encountered while working on the Xbox movies recommendation system. The item catalog in a recommender system is typically equipped with meta-data features in the form of labels. However, only part of these features are informative or useful with regard to collaborative filtering. By incorporating a novel sparsity prior on feature parameters, the model automatically discerns and utilizes informative features while simultaneously pruning non-informative features.\n The model is designed for binary feedback, which is common in many real-world systems where numeric rating data is scarce or non-existent. However, the overall framework is applicable to any likelihood function. Model parameters are estimated with a Variational Bayes inference algorithm, which is robust to over-fitting and does not require cross-validation and fine tuning of regularization coefficients. The efficacy of our method is illustrated on a sample from the Xbox movies dataset as well as on the publicly available MovieLens dataset. In both cases, the proposed solution provides superior predictive accuracy, especially for long-tail items. We then demonstrate the feature selection capabilities and compare against the common case of simple Gaussian priors. Finally, we show that even without features, our model performs better than a baseline model trained with the popular stochastic gradient descent approach.",
"title": ""
},
{
"docid": "601748e27c7b3eefa4ff29252b42bf93",
"text": "A simple, fast method is presented for the interpolation of texture coordinates and shading parameters for polygons viewed in perspective. The method has application in scan conversion algorithms like z-bu er and painter's algorithms that perform screen space interpolation of shading parameters such as texture coordinates, colors, and normal vectors. Some previous methods perform linear interpolation in screen space, but this is rotationally variant, and in the case of texture mapping, causes a disturbing \\rubber sheet\" e ect. To correctly compute the nonlinear, projective transformation between screen space and parameter space, we use rational linear interpolation across the polygon, performing several divisions at each pixel. We present simpler formulas for setting up these interpolation computations, reducing the setup cost per polygon to nil and reducing the cost per vertex to a handful of divisions. Additional keywords: incremental, perspective, projective, a ne.",
"title": ""
},
{
"docid": "77059bf4b66792b4f34bc78bbb0b373a",
"text": "Cloud computing systems host most of today's commercial business applications yielding it high revenue which makes it a target of cyber attacks. This emphasizes the need for a digital forensic mechanism for the cloud environment. Conventional digital forensics cannot be directly presented as a cloud forensic solution due to the multi tenancy and virtualization of resources prevalent in cloud. While we do cloud forensics, the data to be inspected are cloud component logs, virtual machine disk images, volatile memory dumps, console logs and network captures. In this paper, we have come up with a remote evidence collection and pre-processing framework using Struts and Hadoop distributed file system. Collection of VM disk images, logs etc., are initiated through a pull model when triggered by the investigator, whereas cloud node periodically pushes network captures to HDFS. Pre-processing steps such as clustering and correlation of logs and VM disk images are carried out through Mahout and Weka to implement cross drive analysis.",
"title": ""
},
{
"docid": "03f99359298276cb588eb8fa85f1e83e",
"text": "In recent years, there has been a growing interest in the wireless sensor networks (WSN) for a variety of applications such as the localization and real time positioning. Different approaches based on artificial intelligence are applied to solve common issues in WSN and improve network performance. This paper addresses a survey on machine learning techniques for localization in WSNs using Received Signal Strength Indicator.",
"title": ""
},
{
"docid": "c4616ae56dd97595f63b60abc2bea55c",
"text": "Driven by the challenges of rapid urbanization, cities are determined to implement advanced socio-technological changes and transform into smarter cities. The success of such transformation, however, greatly relies on a thorough understanding of the city's states of spatiotemporal flux. The ability to understand such fluctuations in context and in terms of interdependencies that exist among various entities across time and space is crucial, if cities are to maintain their smart growth. Here, we introduce a Smart City Digital Twin paradigm that can enable increased visibility into cities' human-infrastructure-technology interactions, in which spatiotemporal fluctuations of the city are integrated into an analytics platform at the real-time intersection of reality-virtuality. Through learning and exchange of spatiotemporal information with the city, enabled through virtualization and the connectivity offered by Internet of Things (IoT), this Digital Twin of the city becomes smarter over time, able to provide predictive insights into the city's smarter performance and growth.",
"title": ""
},
{
"docid": "14fe96edca3ae38979c5d72f1d8aef40",
"text": "How can prior knowledge on the transformation invariances of a domain be incorporated into the architecture of a neural network? We propose Equivariant Transformers (ETs), a family of differentiable image-to-image mappings that improve the robustness of models towards pre-defined continuous transformation groups. Through the use of specially-derived canonical coordinate systems, ETs incorporate functions that are equivariant by construction with respect to these transformations. We show empirically that ETs can be flexibly composed to improve model robustness towards more complicated transformation groups in several parameters. On a real-world image classification task, ETs improve the sample efficiency of ResNet classifiers, achieving relative improvements in error rate of up to 15% in the limited data regime while increasing model parameter count by less than 1%.",
"title": ""
},
{
"docid": "15208617386aeb77f73ca7c2b7bb2656",
"text": "Multiplication is the basic building block for several DSP processors, Image processing and many other. Over the years the computational complexities of algorithms used in Digital Signal Processors (DSPs) have gradually increased. This requires a parallel array multiplier to achieve high execution speed or to meet the performance demands. A typical implementation of such an array multiplier is Braun design. Braun multiplier is a type of parallel array multiplier. The architecture of Braun multiplier mainly consists of some Carry Save Adders, array of AND gates and one Ripple Carry Adder. In this research work, a new design of Braun Multiplier is proposed and this proposed design of multiplier uses a very fast parallel prefix adder ( Kogge Stone Adder) in place of Ripple Carry Adder. The architecture of standard Braun Multiplier is modified in this work for reducing the delay due to Ripple Carry Adder and performing faster multiplication of two binary numbers. This research also presents a comparative study of FPGA implementation on Spartan2 and Spartartan2E for new multiplier design and standard braun multiplier. The RTL design of proposed new Braun Multiplier and standard braun multiplier is done using Verilog HDL. The simulation is performed using ModelSim. The Xilinx ISE design tool is used for FPGA implementation. Comparative result shows the modified design is effective when compared in terms of delay with the standard design.",
"title": ""
},
{
"docid": "17eded575bf5e123030b93ec5dc19bc5",
"text": "Our research is aimed at developing a quantitative approach for assessing supply chain resilience to disasters, a topic that has been discussed primarily in a qualitative manner in the literature. For this purpose, we propose a simulation-based framework that incorporates concepts of resilience into the process of supply chain design. In this context, resilience is defined as the ability of a supply chain system to reduce the probabilities of disruptions, to reduce the consequences of those disruptions, and to reduce the time to recover normal performance. The decision framework incorporates three determinants of supply chain resilience (density, complexity, and node criticality) and discusses their relationship to the occurrence of disruptions, to the impacts of those disruptions on the performance of a supply chain system and to the time needed for recovery. Different preliminary strategies for evaluating supply chain resilience to disasters are identified, and directions for future research are discussed.",
"title": ""
},
{
"docid": "5b4045a80ae584050a9057ba32c9296b",
"text": "Electro-rheological (ER) fluids are smart fluids which can transform into solid-like phase by applying an electric field. This process is reversible and can be strategically used to build fluidic components for innovative soft robots capable of soft locomotion. In this work, we show the potential applications of ER fluids to build valves that simplify design of fluidic based soft robots. We propose the design and development of a composite ER valve, aimed at controlling the flexibility of soft robots bodies by controlling the ER fluid flow. We present how an ad hoc number of such soft components can be embodied in a simple crawling soft robot (Wormbot); in a locomotion mechanism capable of forward motion through rotation; and, in a tendon driven continuum arm. All these embodiments show how simplification of the hydraulic circuits relies on the simple structure of ER valves. Finally, we address preliminary experiments to characterize the behavior of Wormbot in terms of actuation forces.",
"title": ""
},
{
"docid": "e060548f90eb06f359b2d8cfcf713c29",
"text": "Objective\nTo conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. We also highlight ongoing research and identify open challenges in building deep learning models of EHRs.\n\n\nDesign/method\nWe searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies.\n\n\nResults\nWe surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task.\n\n\nDiscussion\nDespite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed. We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.",
"title": ""
},
{
"docid": "521fd4ce53761c9bda64b13a91513c18",
"text": "The importance of organizational agility in a competitive environment is nowadays widely recognized and accepted. However, despite this awareness, the availability of tools and methods that support an organization in assessing and improving their organizational agility is scarce. Therefore, this study introduces the Organizational Agility Maturity Model in order to provide an easy-to-use yet powerful assessment tool for organizations in the software and IT service industry. Based on a design science research approach with a comprehensive literature review and an empirical investigation utilizing factor analysis, both scientific rigor as well as practical relevance is ensured. The applicability is further demonstrated by a cluster analysis identifying patterns of organizational agility that fit to the maturity model. The Organizational Agility Maturity Model further contributes to the field by providing a theoretically and empirically grounded structure of organizational agility supporting the efforts of developing a common understanding of the concept.",
"title": ""
},
{
"docid": "44dfc8c3c5c1f414197ad7cd8dedfb2e",
"text": "In this paper, we propose a framework for formation stabilization of multiple autonomous vehicles in a distributed fashion. Each vehicle is assumed to have simple dynamics, i.e. a double-integrator, with a directed (or an undirected) information flow over the formation graph of the vehicles. Our goal is to find a distributed control law (with an efficient computational cost) for each vehicle that makes use of limited information regarding the state of other vehicles. Here, the key idea in formation stabilization is the use of natural potential functions obtained from structural constraints of a desired formation in a way that leads to a collision-free, distributed, and bounded state feedback law for each vehicle.",
"title": ""
},
{
"docid": "2acdc7dfe5ae0996ef0234ec51a34fe5",
"text": "The on-line or automatic visual inspection of PCB is basically a very first examination before its electronic testing. This inspection consists of mainly missing or wrongly placed components in the PCB. If there is any missing electronic component then it is not so damaging the PCB. But if any of the component that can be placed only in one way and has been soldered in other way around, then the same will be damaged and there are chances that other components may also get damaged. To avoid this, an automatic visual inspection is in demand that may take care of the missing or wrongly placed electronic components. In the presented paper work, an automatic machine vision system for inspection of PCBs for any missing component as compared with the standard one has been proposed. The system primarily consists of two parts: 1) the learning process, where the system is trained for the standard PCB, and 2) inspection process where the PCB under test is inspected for any missing component as compared with the standard one. The proposed system can be deployed on a manufacturing line with a much more affordable price comparing to other commercial inspection systems.",
"title": ""
}
] | scidocsrr |
f0431a47bb75b36308a735769caad188 | Stacked convolutional auto-encoders for steganalysis of digital images | [
{
"docid": "f0b522d7f3a0eeb6cb951356407cf15a",
"text": "Today, the most accurate steganalysis methods for digital media are built as supervised classifiers on feature vectors extracted from the media. The tool of choice for the machine learning seems to be the support vector machine (SVM). In this paper, we propose an alternative and well-known machine learning tool-ensemble classifiers implemented as random forests-and argue that they are ideally suited for steganalysis. Ensemble classifiers scale much more favorably w.r.t. the number of training examples and the feature dimensionality with performance comparable to the much more complex SVMs. The significantly lower training complexity opens up the possibility for the steganalyst to work with rich (high-dimensional) cover models and train on larger training sets-two key elements that appear necessary to reliably detect modern steganographic algorithms. Ensemble classification is portrayed here as a powerful developer tool that allows fast construction of steganography detectors with markedly improved detection accuracy across a wide range of embedding methods. The power of the proposed framework is demonstrated on three steganographic methods that hide messages in JPEG images.",
"title": ""
},
{
"docid": "33069cfad58493e2f2fdd3effcdf0279",
"text": "Recent findings [HOT06] have made possible the learning of deep layered hierarchical representations of data mimicking the brains working. It is hoped that this paradigm will unlock some of the power of the brain and lead to advances towards true AI. In this thesis I implement and evaluate state-of-the-art deep learning models and using these as building blocks I investigate the hypothesis that predicting the time-to-time sensory input is a good learning objective. I introduce the Predictive Encoder (PE) and show that a simple non-regularized learning rule, minimizing prediction error on natural video patches leads to receptive fields similar to those found in Macaque monkey visual area V1. I scale this model to video of natural scenes by introducing the Convolutional Predictive Encoder (CPE) and show similar results. Both models can be used in deep architectures as a deep learning module.",
"title": ""
}
] | [
{
"docid": "0241cef84d46b942ee32fc7345874b90",
"text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.",
"title": ""
},
{
"docid": "30817500bafa489642779975875e270f",
"text": "We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Our contribution is a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T 2 statistic. Working within a high-dimensional framework that allows (p, n) → ∞, we first derive an asymptotic power function for our test, and then provide sufficient conditions for it to achieve greater power than other state-of-the-art tests. Using ROC curves generated from simulated data, we demonstrate superior performance against competing tests in the parameter regimes anticipated by our theoretical results. Lastly, we illustrate an advantage of our procedure with comparisons on a high-dimensional gene expression dataset involving the discrimination of different types of cancer.",
"title": ""
},
{
"docid": "ef1c42ff8348aa9c20a65dafdb98e93e",
"text": "This study investigates the influence of online news and clickbait headlines on online users’ emotional arousal and behavior. An experiment was conducted to examine the level of arousal in three online news headline groups—news headlines, clickbait headlines, and control headlines. Arousal was measured by two different measurement approaches—pupillary response recorded by an eye-tracking device and selfassessment manikin (SAM) reported in a survey. Overall, the findings suggest that certain clickbait headlines can evoke users’ arousal which subsequently drives intention to read news stories. Arousal scores assessed by the pupillary response and SAM are consistent when the level of emotional arousal is high.",
"title": ""
},
{
"docid": "8d07f52f154f81ce9dedd7c5d7e3182d",
"text": "We present a 3D face reconstruction system that takes as input either one single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are then used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks which are used to deform the generic 3D face model. Experimental results on the Color FERET and CMU multi-PIE databases confirm our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.",
"title": ""
},
{
"docid": "8d9246e7780770b5f7de9ef0adbab3e6",
"text": "This paper proposes a self-adaption Kalman observer (SAKO) used in a permanent-magnet synchronous motor (PMSM) servo system. The proposed SAKO can make up measurement noise of the absolute encoder with limited resolution ratio and avoid differentiating process and filter delay of the traditional speed measuring methods. To be different from the traditional Kalman observer, the proposed observer updates the gain matrix by calculating the measurement noise at the current time. The variable gain matrix is used to estimate and correct the observed position, speed, and load torque to solve the problem that the motor speed calculated by the traditional methods is prone to large speed error and time delay when PMSM runs at low speeds. The state variables observed by the proposed observer are used as the speed feedback signals and compensation signal of the load torque disturbance in PMSM servo system. The simulations and experiments prove that the SAKO can observe speed and load torque precisely and timely and that the feedforward and feedback control system of PMSM can improve the speed tracking ability.",
"title": ""
},
{
"docid": "984f7a2023a14efbbd5027abfc12a586",
"text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.",
"title": ""
},
{
"docid": "e0d8d1f65424080d538d87564783bdbb",
"text": "Many deals that look good on paper never materialize into value-creating endeavors. Often, the problem begins at the negotiating table. In fact, the very person everyone thinks is pivotal to a deal's success--the negotiator--is often the one who undermines it. That's because most negotiators have a deal maker mind-set: They see the signed contract as the final destination rather than the start of a cooperative venture. What's worse, most companies reward negotiators on the basis of the number and size of the deals they're signing, giving them no incentive to change. The author asserts that organizations and negotiators must transition from a deal maker mentality--which involves squeezing your counterpart for everything you can get--to an implementation mind-set--which sets the stage for a healthy working relationship long after the ink has dried. Achieving an implementation mind-set demands five new approaches. First, start with the end in mind: Negotiation teams should carry out a \"benefit of hindsight\" exercise to imagine what sorts of problems they'll have encountered 12 months down the road. Second, help your counterpart prepare. Surprise confers advantage only because the other side has no time to think through all the implications of a proposal. If they agree to something they can't deliver, it will affect you both. Third, treat alignment as a shared responsibility. After all, if the other side's interests aren't aligned, it's your problem, too. Fourth, send one unified message. Negotiators should brief implementation teams on both sides together so everyone has the same information. And fifth, manage the negotiation like a business exercise: Combine disciplined negotiation preparation with post-negotiation reviews. Above all, companies must remember that the best deals don't end at the negotiating table--they begin there.",
"title": ""
},
{
"docid": "07a048f6d960a3e11433bd10a4d40836",
"text": "This paper presents a survey of topological spatial logics, taking as its point of departure the interpretation of the modal logic S4 due to McKinsey and Tarski. We consider the effect of extending this logic with the means to represent topological connectedness, focusing principally on the issue of computational complexity. In particular, we draw attention to the special problems which arise when the logics are interpreted not over arbitrary topological spaces, but over (low-dimensional) Euclidean spaces.",
"title": ""
},
{
"docid": "05941fa5fe1d7728d9bce44f524ff17f",
"text": "legend N2D N1D 2LPEG N2D vs. 2LPEG N1D vs. 2LPEG EFFICACY Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 275 Primary analysis set, n1⁄4 272 Primary endpoint: Patients with successful overall bowel cleansing efficacy (HCS) [n] 253 (92.0%) 245 (89.1%) 238 (87.5%) -4.00%* [0.055] -6.91%* [0.328] Supportive secondary endpoint: Patients with successful overall bowel cleansing efficacy (BBPS) [n] 249 (90.5%) 243 (88.4%) 232 (85.3%) n.a. n.a. Primary endpoint: Excellent plus Good cleansing rate in colon ascendens (primary analysis set) [n] 87 (31.6%) 93 (33.8%) 41 (15.1%) 8.11%* [50.001] 10.32%* [50.001] Key secondary endpoint: Adenoma detection rate, colon ascendens 11.6% 11.6% 8.1% -4.80%; 12.00%** [0.106] -4.80%; 12.00%** [0.106] Key secondary endpoint: Adenoma detection rate, overall colon 26.6% 27.6% 26.8% -8.47%; 8.02%** [0.569] -7.65%; 9.11%** [0.455] Key secondary endpoint: Polyp detection rate, colon ascendens 23.3% 18.6% 16.2% -1.41%; 15.47%** [0.024] -6.12%; 10.82%** [0.268] Key secondary endpoint: Polyp detection rate, overall colon 44.0% 45.1% 44.5% -8.85%; 8.00%** [0.579] –7.78%; 9.09%** [0.478] Compliance rates (min 75% of both doses taken) [n] 235 (85.5%) 233 (84.7%) 245 (90.1%) n.a. n.a. SAFETY Safety set, n1⁄4 262 Safety set, n1⁄4 269 Safety set, n1⁄4 263 All treatment-emergent adverse events [n] 77 89 53 n.a. n.a. Patients with any related treatment-emergent adverse event [n] 30 (11.5%) 40 (14.9%) 20 (7.6%) n.a. n.a. *1⁄4 97.5% 1-sided CI; **1⁄4 95% 2-sided CI; n.a.1⁄4 not applicable. United European Gastroenterology Journal 4(5S) A219",
"title": ""
},
{
"docid": "4507c71798a856be64381d7098f30bf4",
"text": "Adversarial examples are intentionally crafted data with the purpose of deceiving neural networks into misclassification. When we talk about strategies to create such examples, we usually refer to perturbation-based methods that fabricate adversarial examples by applying invisible perturbations onto normal data. The resulting data reserve their visual appearance to human observers, yet can be totally unrecognizable to DNN models, which in turn leads to completely misleading predictions. In this paper, however, we consider crafting adversarial examples from existing data as a limitation to example diversity. We propose a non-perturbationbased framework that generates native adversarial examples from class-conditional generative adversarial networks. As such, the generated data will not resemble any existing data and thus expand example diversity, raising the difficulty in adversarial defense. We then extend this framework to pre-trained conditional GANs, in which we turn an existing generator into an \"adversarial-example generator\". We conduct experiments on our approach for MNIST and CIFAR10 datasets and have satisfactory results, showing that this approach can be a potential alternative to previous attack strategies.",
"title": ""
},
{
"docid": "6e52471655da243e278f121cd1b12596",
"text": "Finite element method (FEM) is a powerful tool in analysis of electrical machines however, the computational cost is high depending on the geometry of analyzed machine. In synchronous reluctance machines (SyRM) with transversally laminated rotors, the anisotropy of magnetic circuit is provided by flux barriers which can be of various shapes. Flux barriers of shape based on Zhukovski's curves seem to provide very good electromagnetic properties of the machine. Complex geometry requires a fine mesh which increases computational cost when performing finite element analysis. By using magnetic equivalent circuit (MEC) it is possible to obtain good accuracy at low cost. This paper presents magnetic equivalent circuit of SyRM with new type of flux barriers. Numerical calculation of flux barriers' reluctances will be also presented.",
"title": ""
},
{
"docid": "4aec1d1c4f4ca3990836a5d15fba81c7",
"text": "P eople with higher cognitive ability (or “IQ”) differ from those with lower cognitive ability in a variety of important and unimportant ways. On average, they live longer, earn more, have larger working memories, faster reaction times and are more susceptible to visual illusions (Jensen, 1998). Despite the diversity of phenomena related to IQ, few have attempted to understand—or even describe—its influences on judgment and decision making. Studies on time preference, risk preference, probability weighting, ambiguity aversion, endowment effects, anchoring and other widely researched topics rarely make any reference to the possible effects of cognitive abilities (or cognitive traits). Decision researchers may neglect cognitive ability because they are more interested in the average effect of some experimental manipulation. On this view, individual differences (in intelligence or anything else) are regarded as a nuisance—as just another source of “unexplained” variance. Second, most studies are conducted on college undergraduates, who are widely perceived as fairly homogenous. Third, characterizing performance differences on cognitive tasks requires terms (“IQ” and “aptitudes” and such) that many object to because of their association with discriminatory policies. In short, researchers may be reluctant to study something they do not find interesting, that is not perceived to vary much within the subject pool conveniently obtained, and that will just get them into trouble anyway. But as Lubinski and Humphreys (1997) note, a neglected aspect does not cease to operate because it is neglected, and there is no good reason for ignoring the possibility that general intelligence or various more specific cognitive abilities are important causal determinants of decision making. To provoke interest in this",
"title": ""
},
{
"docid": "0bf150f6cd566c31ec840a57d8d2fa55",
"text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.",
"title": ""
},
{
"docid": "16118317af9ae39ee95765616c5506ed",
"text": "Generative Adversarial Networks (GANs) are shown to be successful at generating new and realistic samples including 3D object models. Conditional GAN, a variant of GANs, allows generating samples in given conditions. However, objects generated for each condition are different and it does not allow generation of the same object in different conditions. In this paper, we first adapt conditional GAN, which is originally designed for 2D image generation, to the problem of generating 3D models in different rotations. We then propose a new approach to guide the network to generate the same 3D sample in different and controllable rotation angles (sample pairs). Unlike previous studies, the proposed method does not require modification of the standard conditional GAN architecture and it can be integrated into the training step of any conditional GAN. Experimental results and visual comparison of 3D models show that the proposed method is successful at generating model pairs in different conditions.",
"title": ""
},
{
"docid": "52722e0d7a11f2deccf5dec893a8febb",
"text": "With more than 340~million messages that are posted on Twitter every day, the amount of duplicate content as well as the demand for appropriate duplicate detection mechanisms is increasing tremendously. Yet there exists little research that aims at detecting near-duplicate content on microblogging platforms. We investigate the problem of near-duplicate detection on Twitter and introduce a framework that analyzes the tweets by comparing (i) syntactical characteristics, (ii) semantic similarity, and (iii) contextual information. Our framework provides different duplicate detection strategies that, among others, make use of external Web resources which are referenced from microposts. Machine learning is exploited in order to learn patterns that help identifying duplicate content. We put our duplicate detection framework into practice by integrating it into Twinder, a search engine for Twitter streams. An in-depth analysis shows that it allows Twinder to diversify search results and improve the quality of Twitter search. We conduct extensive experiments in which we (1) evaluate the quality of different strategies for detecting duplicates, (2) analyze the impact of various features on duplicate detection, (3) investigate the quality of strategies that classify to what exact level two microposts can be considered as duplicates and (4) optimize the process of identifying duplicate content on Twitter. Our results prove that semantic features which are extracted by our framework can boost the performance of detecting duplicates.",
"title": ""
},
{
"docid": "f1a4874767c7b4e0c45a97e516b885d0",
"text": "It is proposed to use weighted least-norm solution to avoid joint limits for redundant joint manipulators. A comparison is made with the gradient projection method for avoiding joint limits. While the gradient projection method provides the optimal direction for the joint velocity vector within the null space, its magnitude is not unique and is adjusted by a scalar coefficient chosen by trial and error. It is shown in this paper that one fixed value of the scalar coefficient is not suitable even in a small workspace. The proposed manipulation scheme automatically chooses an appropriate magnitude of the self-motion throughout the workspace. This scheme, unlike the gradient projection method, guarantees joint limit avoidance, and also minimizes unnecessary self-motion. It was implemented and tested for real-time control of a seven-degree-offreedom (7-DOF) Robotics Research Corporation (RRC) manipulator.",
"title": ""
},
{
"docid": "b5fe13becf36cdc699a083b732dc5d6a",
"text": "The stability of two-dimensional, linear, discrete systems is examined using the 2-D matrix Lyapunov equation. While the existence of a positive definite solution pair to the 2-D Lyapunov equation is sufficient for stability, the paper proves that such existence is not necessary for stability, disproving a long-standing conjecture.",
"title": ""
},
{
"docid": "c7e5a93ecc6714ffbb39809fb64b440c",
"text": "This study investigated the role of self-directed learning (SDL) in problem-based learning (PBL) and examined how SDL relates to self-regulated learning (SRL). First, it is explained how SDL is implemented in PBL environments. Similarities between SDL and SRL are highlighted. However, both concepts differ on important aspects. SDL includes an additional premise of giving students a broader role in the selection and evaluation of learning materials. SDL can encompass SRL, but the opposite does not hold. Further, a review of empirical studies on SDL and SRL in PBL was conducted. Results suggested that SDL and SRL are developmental processes, that the “self” aspect is crucial, and that PBL can foster SDL. It is concluded that conceptual clarity of what SDL entails and guidance for both teachers and students can help PBL to bring forth self-directed learners.",
"title": ""
},
{
"docid": "b12b500f7c6ac3166eb4fbdd789196ea",
"text": "Theory of Mind (ToM) is the ability to attribute thoughts, intentions and beliefs to others. This involves component processes, including cognitive perspective taking (cognitive ToM) and understanding emotions (affective ToM). This study assessed the distinction and overlap of neural processes involved in these respective components, and also investigated their development between adolescence and adulthood. While data suggest that ToM develops between adolescence and adulthood, these populations have not been compared on cognitive and affective ToM domains. Using fMRI with 15 adolescent (aged 11-16 years) and 15 adult (aged 24-40 years) males, we assessed neural responses during cartoon vignettes requiring cognitive ToM, affective ToM or physical causality comprehension (control). An additional aim was to explore relationships between fMRI data and self-reported empathy. Both cognitive and affective ToM conditions were associated with neural responses in the classic ToM network across both groups, although only affective ToM recruited medial/ventromedial PFC (mPFC/vmPFC). Adolescents additionally activated vmPFC more than did adults during affective ToM. The specificity of the mPFC/vmPFC response during affective ToM supports evidence from lesion studies suggesting that vmPFC may integrate affective information during ToM. Furthermore, the differential neural response in vmPFC between adult and adolescent groups indicates developmental changes in affective ToM processing.",
"title": ""
}
] | scidocsrr |
d99d907ffd9190cff50689e768857791 | Disease Prediction from Electronic Health Records Using Generative Adversarial Networks | [
{
"docid": "897a6d208785b144b5d59e4f346134cd",
"text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.",
"title": ""
},
{
"docid": "ca331150e60e24f038f9c440b8125ddc",
"text": "Class imbalance is one of the challenges of machine learning and data mining fields. Imbalance data sets degrades the performance of data mining and machine learning techniques as the overall accuracy and decision making be biased to the majority class, which lead to misclassifying the minority class samples or furthermore treated them as noise. This paper proposes a general survey for class imbalance problem solutions and the most significant investigations recently introduced by researchers.",
"title": ""
}
] | [
{
"docid": "da43061319adbfd41c77483590a3c819",
"text": "Sleep bruxism (SB) is reported by 8% of the adult population and is mainly associated with rhythmic masticatory muscle activity (RMMA) characterized by repetitive jaw muscle contractions (3 bursts or more at a frequency of 1 Hz). The consequences of SB may include tooth destruction, jaw pain, headaches, or the limitation of mandibular movement, as well as tooth-grinding sounds that disrupt the sleep of bed partners. SB is probably an extreme manifestation of a masticatory muscle activity occurring during the sleep of most normal subjects, since RMMA is observed in 60% of normal sleepers in the absence of grinding sounds. The pathophysiology of SB is becoming clearer, and there is an abundance of evidence outlining the neurophysiology and neurochemistry of rhythmic jaw movements (RJM) in relation to chewing, swallowing, and breathing. The sleep literature provides much evidence describing the mechanisms involved in the reduction of muscle tone, from sleep onset to the atonia that characterizes rapid eye movement (REM) sleep. Several brainstem structures (e.g., reticular pontis oralis, pontis caudalis, parvocellularis) and neurochemicals (e.g., serotonin, dopamine, gamma aminobutyric acid [GABA], noradrenaline) are involved in both the genesis of RJM and the modulation of muscle tone during sleep. It remains unknown why a high percentage of normal subjects present RMMA during sleep and why this activity is three times more frequent and higher in amplitude in SB patients. It is also unclear why RMMA during sleep is characterized by co-activation of both jaw-opening and jaw-closing muscles instead of the alternating jaw-opening and jaw-closing muscle activity pattern typical of chewing. The final section of this review proposes that RMMA during sleep has a role in lubricating the upper alimentary tract and increasing airway patency. The review concludes with an outline of questions for future research.",
"title": ""
},
{
"docid": "24c1b31bac3688c901c9b56ef9a331da",
"text": "Advanced Persistent Threats (APTs) are a new breed of internet based smart threats, which can go undetected with the existing state of-the-art internet traffic monitoring and protection systems. With the evolution of internet and cloud computing, a new generation of smart APT attacks has also evolved and signature based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. The current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously. Mostly, current algorithms focus on reducing false positives, only. This paper presents a fractal based anomaly classification mechanism, with the goal of reducing both false positives and false negatives, simultaneously. A comparison of the proposed fractal based method with a traditional Euclidean based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates, simultaneously, while improving the overall classification rates.",
"title": ""
},
{
"docid": "59759e16adfbb3b08cf9a8deb8352b6e",
"text": "Media images of the female body commonly represent reigning appearance ideals of the era in which they are published. To date, limited documentation of the genital appearance ideals in mainstream media exists. Analysis 1 sought to describe genital appearance ideals (i.e., mons pubis and labia majora visibility, labia minora size and color, and pubic hair style) and general physique ideals (i.e., hip, waist, and bust size, height, weight, and body mass index [BMI]) across time based on 647 Playboy Magazine centerfolds published between 1953 and 2007. Analysis 2 focused exclusively on the genital appearance ideals embodied by models in 185 Playboy photographs published between 2007 and 2008. Taken together, results suggest the perpetuation of a \"Barbie Doll\" ideal characterized by a low BMI, narrow hips, a prominent bust, and hairless, undefined genitalia resembling those of a prepubescent female.",
"title": ""
},
{
"docid": "c56eac3f4ee971beb833d25d95ff2f10",
"text": "Automatic Number Plate Recognition (ANPR) is a real time embedded system which automatically recognizes the license number of vehicles. In this paper, the task of recognizing number plate for Indian conditions is considered, where number plate standards are rarely followed.",
"title": ""
},
{
"docid": "6005ebbe5848655fda5127f555f70764",
"text": "The ability to record and replay program execution helps significantly in debugging non-deterministic MPI applications by reproducing message-receive orders. However, the large amount of data that traditional record-and-reply techniques record precludes its practical applicability to massively parallel applications. In this paper, we propose a new compression algorithm, Clock Delta Compression (CDC), for scalable record and replay of non-deterministic MPI applications. CDC defines a reference order of message receives based on a totally ordered relation using Lamport clocks, and only records the differences between this reference logical-clock order and an observed order. Our evaluation shows that CDC significantly reduces the record data size. For example, when we apply CDC to Monte Carlo particle transport Benchmark (MCB), which represents common non-deterministic communication patterns, CDC reduces the record size by approximately two orders of magnitude compared to traditional techniques and incurs between 13.1% and 25.5% of runtime overhead.",
"title": ""
},
{
"docid": "6adbe9f2de5a070cf9c1b7f708f4a452",
"text": "Prior research has provided valuable insights into how and why employees make a decision about the adoption and use of information technologies (ITs) in the workplace. From an organizational point of view, however, the more important issue is how managers make informed decisions about interventions that can lead to greater acceptance and effective utilization of IT. There is limited research in the IT implementation literature that deals with the role of interventions to aid such managerial decision making. Particularly, there is a need to understand how various interventions can influence the known determinants of IT adoption and use. To address this gap in the literature, we draw from the vast body of research on the technology acceptance model (TAM), particularly the work on the determinants of perceived usefulness and perceived ease of use, and: (i) develop a comprehensive nomological network (integrated model) of the determinants of individual level (IT) adoption and use; (ii) empirically test the proposed integrated model; and (iii) present a research agenda focused on potential preand postimplementation interventions that can enhance employees’ adoption and use of IT. Our findings and research agenda have important implications for managerial decision making on IT implementation in organizations. Subject Areas: Design Characteristics, Interventions, Management Support, Organizational Support, Peer Support, Technology Acceptance Model (TAM), Technology Adoption, Training, User Acceptance, User Involvement, and User Participation.",
"title": ""
},
{
"docid": "dd1e7bb3ba33c5ea711c0d066db53fa9",
"text": "This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control is featured with a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the longitudinal conversion loss filter and of the involved PI-controllers are described in detail. Test results show the proposed methods works properly.",
"title": ""
},
{
"docid": "77a36de6a2bae1a0c2a6e2aa8b097d7b",
"text": "We present a palette-based framework for color composition for visual applications. Color composition is a critical aspect of visual applications in art, design, and visualization. The color wheel is often used to explain pleasing color combinations in geometric terms, and, in digital design, to provide a user interface to visualize and manipulate colors. We abstract relationships between palette colors as a compact set of axes describing harmonic templates over perceptually uniform color wheels. Our framework provides a basis for a variety of color-aware image operations, such as color harmonization and color transfer, and can be applied to videos. To enable our approach, we introduce an extremely scalable and efficient yet simple palette-based image decomposition algorithm. Our approach is based on the geometry of images in RGBXY-space. This new geometric approach is orders of magnitude more efficient than previous work and requires no numerical optimization. We demonstrate a real-time layer decomposition tool. After preprocessing, our algorithm can decompose 6 MP images into layers in 20 milliseconds. We also conducted three large-scale, wide-ranging perceptual studies on the perception of harmonic colors and harmonization algorithms.",
"title": ""
},
{
"docid": "2ffb0a4ceb5c049b480001245ba61f21",
"text": "Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than X-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [J. Roy. Statist. Soc. Ser. B 44 (1982) 139–177]. We derive a fast variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. We apply the CTM to the articles from Science published from 1990–1999, a data set that comprises 57M words. The CTM gives a better fit of the data than LDA, and we demonstrate its use as an exploratory tool of large document collections.",
"title": ""
},
{
"docid": "a9d1cdfd844a7347d255838d5eb74b03",
"text": "An economy based on the exchange of capital, assets and services between individuals has grown significantly, spurred by proliferation of internet-based platforms that allow people to share underutilized resources and trade with reasonably low transaction costs. The movement toward this economy of “sharing” translates into market efficiencies that bear new products, reframe established services, have positive environmental effects, and may generate overall economic growth. This emerging paradigm, entitled the collaborative economy, is disruptive to the conventional company-driven economic paradigm as evidenced by the large number of peer-to-peer based services that have captured impressive market shares sectors ranging from transportation and hospitality to banking and risk capital. The panel explores economic, social, and technological implications of the collaborative economy, how digital technologies enable it, and how the massive sociotechnical systems embodied in these new peer platforms may evolve in response to the market and social forces that drive this emerging ecosystem.",
"title": ""
},
{
"docid": "d19eceb87e0ebb03284c867efe709060",
"text": "Vehicular Ad hoc Networks (VANETs) are the promising approach to provide safety and other applications to the drivers as well as passengers. It becomes a key component of the intelligent transport system. A lot of works have been done towards it but security in VANET got less attention. In this article, we have discussed about the VANET and its technical and security challenges. We have also discussed some major attacks and solutions that can be implemented against these attacks. We have compared the solution using different parameters. Lastly we have discussed the mechanisms that are used in the solutions.",
"title": ""
},
{
"docid": "f7d06c6f2313417fd2795ce4c4402f0e",
"text": "Decades of research suggest that similarity in demographics, values, activities, and attitudes predicts higher marital satisfaction. The present study examined the relationship between similarity in Big Five personality factors and initial levels and 12-year trajectories of marital satisfaction in long-term couples, who were in their 40s and 60s at the beginning of the study. Across the entire sample, greater overall personality similarity predicted more negative slopes in marital satisfaction trajectories. In addition, spousal similarity on Conscientiousness and Extraversion more strongly predicted negative marital satisfaction outcomes among the midlife sample than among the older sample. Results are discussed in terms of the different life tasks faced by young, midlife, and older adults, and the implications of these tasks for the \"ingredients\" of marital satisfaction.",
"title": ""
},
{
"docid": "8f3eaf1a65cd3d81e718143304e4ce81",
"text": "Issue tracking systems store valuable data for testing hypotheses concerning maintenance, building statistical prediction models and recently investigating developers \"affectiveness\". In particular, the Jira Issue Tracking System is a proprietary tracking system that has gained a tremendous popularity in the last years and offers unique features like the project management system and the Jira agile kanban board. This paper presents a dataset extracted from the Jira ITS of four popular open source ecosystems (as well as the tools and infrastructure used for extraction) the Apache Software Foundation, Spring, JBoss and CodeHaus communities. Our dataset hosts more than 1K projects, containing more than 700K issue reports and more than 2 million issue comments. Using this data, we have been able to deeply study the communication process among developers, and how this aspect affects the development process. Furthermore, comments posted by developers contain not only technical information, but also valuable information about sentiments and emotions. Since sentiment analysis and human aspects in software engineering are gaining more and more importance in the last years, with this repository we would like to encourage further studies in this direction.",
"title": ""
},
{
"docid": "532ded1b0cc25a21464996a15a976125",
"text": "Folded-plate structures provide an efficient design using thin laminated veneer lumber panels. Inspired by Japanese furniture joinery, the multiple tab-and-slot joint was developed for the multi-assembly of timber panels with non-parallel edges without adhesive or metal joints. Because the global analysis of our origami structures reveals that the rotational stiffness at ridges affects the global behaviour, we propose an experimental and numerical study of this linear interlocking connection. Its geometry is governed by three angles that orient the contact faces. Nine combinations of these angles were tested and the rotational slip was measured with two different bending set-ups: closing or opening the fold formed by two panels. The non-linear behaviour was conjointly reproduced numerically using the finite element method and continuum damage mechanics.",
"title": ""
},
{
"docid": "c6a429e06f634e1dee995d0537777b4b",
"text": "Digital image editing is usually an iterative process; users repetitively perform short sequences of operations, as well as undo and redo using history navigation tools. In our collected data, undo, redo and navigation constitute about 9 percent of the total commands and consume a significant amount of user time. Unfortunately, such activities also tend to be tedious and frustrating, especially for complex projects.\n We address this crucial issue by adaptive history, a UI mechanism that groups relevant operations together to reduce user workloads. Such grouping can occur at various history granularities. We present two that have been found to be most useful. On a fine level, we group repeating commands patterns together to facilitate smart undo. On a coarse level, we segment commands history into chunks for semantic navigation. The main advantages of our approach are that it is intuitive to use and easy to integrate into any existing tools with text-based history lists. Unlike prior methods that are predominately rule based, our approach is data driven, and thus adapts better to common editing tasks which exhibit sufficient diversity and complexity that may defy predetermined rules or procedures.\n A user study showed that our system performs quantitatively better than two other baselines, and the participants also gave positive qualitative feedbacks on the system features.",
"title": ""
},
{
"docid": "cd7fa5de19b12bdded98f197c1d9cd22",
"text": "Many event monitoring systems rely on counting known keywords in streaming text data to detect sudden spikes in frequency. But the dynamic and conversational nature of Twitter makes it hard to select known keywords for monitoring. Here we consider a method of automatically finding noun phrases (NPs) as keywords for event monitoring in Twitter. Finding NPs has two aspects, identifying the boundaries for the subsequence of words which represent the NP, and classifying the NP to a specific broad category such as politics, sports, etc. To classify an NP, we define the feature vector for the NP using not just the words but also the author's behavior and social activities. Our results show that we can classify many NPs by using a sample of training data from a knowledge-base.",
"title": ""
},
{
"docid": "336d91ba4c688350f308982f8b09dd4b",
"text": "osting by E Abstract Extraction–transformation–loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, its cleansing, customization, reformatting, integration, and insertion into a data warehouse. Building the ETL process is potentially one of the biggest tasks of building a warehouse; it is complex, time consuming, and consume most of data warehouse project’s implementation efforts, costs, and resources. Building a data warehouse requires focusing closely on understanding three main areas: the source area, the destination area, and the mapping area (ETL processes). The source area has standard models such as entity relationship diagram, and the destination area has standard models such as star schema, but the mapping area has not a standard model till now. In spite of the importance of ETL processes, little research has been done in this area due to its complexity. There is a clear lack of a standard model that can be used to represent the ETL scenarios. In this paper we will try to navigate through the efforts done to conceptualize",
"title": ""
},
{
"docid": "fe9724a94d1aa13e4fbefa7c88ac09dd",
"text": "We demonstrate a multimodal dialogue system using reinforcement learning for in-car scenarios, developed at Edinburgh University and Cambridge University for the TALK project1. This prototype is the first “Information State Update” (ISU) dialogue system to exhibit reinforcement learning of dialogue strategies, and also has a fragmentary clarification feature. This paper describes the main components and functionality of the system, as well as the purposes and future use of the system, and surveys the research issues involved in its construction. Evaluation of this system (i.e. comparing the baseline system with handcoded vs. learnt dialogue policies) is ongoing, and the demonstration will show both.",
"title": ""
},
{
"docid": "f1e646a0627a5c61a0f73a41d35ccac7",
"text": "Smart cities play an increasingly important role for the sustainable economic development of a determined area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, it is required scalability, extensibility and integration of new resources in order to reach a higher awareness about the energy consumption, distribution and generation, which allows a suitable modeling which can enable new countermeasure and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogenous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over the emerging M2M protocols such as CoAP built over REST architecture. This follows up the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement and interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.",
"title": ""
}
] | scidocsrr |
c5f63d9c38b752c288cf04ab7a471093 | Non-Negative Matrix Factorization Revisited: Uniqueness and Algorithm for Symmetric Decomposition | [
{
"docid": "9c949a86346bda32a73f986651ab8067",
"text": "Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have b ecome prominent techniques for blind sources separation (BSS), analys is of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of e fficient algorithms for NMF/NTF, as well as sparse nonnegative coding and representatio n, that has many potential applications in computational neur oscience, multisensory processing, compressed sensing and multidimensio nal data analysis. We have developed a class of optimized local algorithm s which are referred to as Hierarchical Alternating Least Squares (HAL S) algorithms. For these purposes, we have performed sequential constrain ed minimization on a set of squared Euclidean distances. We then extend t his approach to robust cost functions using the Alpha and Beta divergence s and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS) not only for the ove r-determined case but also for an under-determined (over-complete) case (i.e., for a system which has less sensors than sources) if data are su fficiently sparse. The NMF learning rules are extended and generalized for N-th order nonnegative tensor factorization (NTF). Moreover, these algorit hms can be tuned to different noise statistics by adjusting a single parameter. Ext ensive experimental results confirm the accuracy and computational p erformance of the developed algorithms, especially, with usage of multilayer hierarchical NMF approach [3]. key words: Nonnegative matrix factorization (NMF), nonnegative tensor factorizations (NTF), nonnegative PARAFAC, model reduction, feature extraction, compression, denoising, multiplicative local learning (adaptive) algorithms, Alpha and Beta divergences.",
"title": ""
}
] | [
{
"docid": "0960aa1abdac4254b84912b14d653ba9",
"text": "Latent Dirichlet Allocation (LDA) mining thematic structure of documents plays an important role in nature language processing and machine learning areas. However, the probability distribution from LDA only describes the statistical relationship of occurrences in the corpus and usually in practice, probability is not the best choice for feature representations. Recently, embedding methods have been proposed to represent words and documents by learning essential concepts and representations, such as Word2Vec and Doc2Vec. The embedded representations have shown more effectiveness than LDA-style representations in many tasks. In this paper, we propose the Topic2Vec approach which can learn topic representations in the same semantic vector space with words, as an alternative to probability distribution. The experimental results show that Topic2Vec achieves interesting and meaningful results.",
"title": ""
},
{
"docid": "2a4439b4368af6317b14d6de03b27e44",
"text": "We introduce an algorithm for tracking deformable objects from a sequence of point clouds. The proposed tracking algorithm is based on a probabilistic generative model that incorporates observations of the point cloud and the physical properties of the tracked object and its environment. We propose a modified expectation maximization algorithm to perform maximum a posteriori estimation to update the state estimate at each time step. Our modification makes it practical to perform the inference through calls to a physics simulation engine. This is significant because (i) it allows for the use of highly optimized physics simulation engines for the core computations of our tracking algorithm, and (ii) it makes it possible to naturally, and efficiently, account for physical constraints imposed by collisions, grasping actions, and material properties in the observation updates. Even in the presence of the relatively large occlusions that occur during manipulation tasks, our algorithm is able to robustly track a variety of types of deformable objects, including ones that are one-dimensional, such as ropes; two-dimensional, such as cloth; and three-dimensional, such as sponges. Our implementation can track these objects in real time.",
"title": ""
},
{
"docid": "a25041f4b95b68d2b8b9356d2f383b69",
"text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.",
"title": ""
},
{
"docid": "ab83fb07e4f9f70a3e4f22620ba551fc",
"text": "OBJECTIVES:Biliary cannulation is frequently the most difficult component of endoscopic retrograde cholangiopancreatography (ERCP). Techniques employed to improve safety and efficacy include wire-guided access and the use of sphincterotomes. However, a variety of options for these techniques are available and optimum strategies are not defined. We assessed whether the use of endoscopist- vs. assistant-controlled wire guidance and small vs. standard-diameter sphincterotomes improves safety and/or efficacy of bile duct cannulation.METHODS:Patients were randomized using a 2 × 2 factorial design to initial cannulation attempt with endoscopist- vs. assistant-controlled wire systems (1:1 ratio) and small (3.9Fr tip) vs. standard (4.4Fr tip) sphincterotomes (1:1 ratio). The primary efficacy outcome was successful deep bile duct cannulation within 8 attempts. Sample size of 498 was planned to demonstrate a significant increase in cannulation of 10%. Interim analysis was planned after 200 patients–with a stopping rule pre-defined for a significant difference in the composite safety end point (pancreatitis, cholangitis, bleeding, and perforation).RESULTS:The study was stopped after the interim analysis, with 216 patients randomized, due to a significant difference in the safety end point with endoscopist- vs. assistant-controlled wire guidance (3/109 (2.8%) vs. 12/107 (11.2%), P=0.016), primarily due to a lower rate of post-ERCP pancreatitis (3/109 (2.8%) vs. 10/107 (9.3%), P=0.049). The difference in successful biliary cannulation for endoscopist- vs. assistant-controlled wire guidance was −0.5% (95% CI−12.0 to 11.1%) and for small vs. standard sphincerotome −0.9% (95% CI–12.5 to 10.6%).CONCLUSIONS:Use of the endoscopist- rather than assistant-controlled wire guidance for bile duct cannulation reduces complications of ERCP such as pancreatitis.",
"title": ""
},
{
"docid": "b3e1bdd7cfca17782bde698297e191ab",
"text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.",
"title": ""
},
{
"docid": "155c9444bfdb61352eddd7140ae75125",
"text": "To the best of our knowledge, we present the first hardware implementation of isogeny-based cryptography available in the literature. Particularly, we present the first implementation of the supersingular isogeny Diffie-Hellman (SIDH) key exchange, which features quantum-resistance. We optimize this design for speed by creating a high throughput multiplier unit, taking advantage of parallelization of arithmetic in $\\mathbb {F}_{p^{2}}$ , and minimizing pipeline stalls with optimal scheduling. Consequently, our results are also faster than software libraries running affine SIDH even on Intel Haswell processors. For our implementation at 85-bit quantum security and 128-bit classical security, we generate ephemeral public keys in 1.655 million cycles for Alice and 1.490 million cycles for Bob. We generate the shared secret in an additional 1.510 million cycles for Alice and 1.312 million cycles for Bob. On a Virtex-7, these results are approximately 1.5 times faster than known software implementations running the same 512-bit SIDH. Our results and observations show that the isogeny-based schemes can be implemented with high efficiency on reconfigurable hardware.",
"title": ""
},
{
"docid": "7f9a565c10fdee58cbe76b7e9351f037",
"text": "The effects of iron substitution on the structural and magnetic properties of the GdCo(12-x)Fe(x)B6 (0 ≤ x ≤ 3) series of compounds have been studied. All of the compounds form in the rhombohedral SrNi12B6-type structure and exhibit ferrimagnetic behaviour below room temperature: T(C) decreases from 158 K for x = 0 to 93 K for x = 3. (155)Gd Mössbauer spectroscopy indicates that the easy magnetization axis changes from axial to basal-plane upon substitution of Fe for Co. This observation has been confirmed using neutron powder diffraction. The axial to basal-plane transition is remarkably sensitive to the Fe content and comparison with earlier (57)Fe-doping studies suggests that the boundary lies below x = 0.1.",
"title": ""
},
{
"docid": "e668f84e16a5d17dff7d638a5543af82",
"text": "Mining topics in Twitter is increasingly attracting more attention. However, the shortness and informality of tweets leads to extreme sparse vector representation with a large vocabulary, which makes the conventional topic models (e.g., Latent Dirichlet Allocation) often fail to achieve high quality underlying topics. Luckily, tweets always show up with rich user-generated hash tags as keywords. In this paper, we propose a novel topic model to handle such semi-structured tweets, denoted as Hash tag Graph based Topic Model (HGTM). By utilizing relation information between hash tags in our hash tag graph, HGTM establishes word semantic relations, even if they haven't co-occurred within a specific tweet. In addition, we enhance the dependencies of both multiple words and hash tags via latent variables (topics) modeled by HGTM. We illustrate that the user-contributed hash tags could serve as weakly-supervised information for topic modeling, and hash tag relation could reveal the semantic relation between tweets. Experiments on a real-world twitter data set show that our model provides an effective solution to discover more distinct and coherent topics than the state-of-the-art baselines and has a strong ability to control sparseness and noise in tweets.",
"title": ""
},
{
"docid": "211058f2d0d5b9cf555a6e301cd80a5d",
"text": "We present a method based on header paths for efficient and complete extraction of labeled data from tables meant for humans. Although many table configurations yield to the proposed syntactic analysis, some require access to semantic knowledge. Clicking on one or two critical cells per table, through a simple interface, is sufficient to resolve most of these problem tables. Header paths, a purely syntactic representation of visual tables, can be transformed (\"factored\") into existing representations of structured data such as category trees, relational tables, and RDF triples. From a random sample of 200 web tables from ten large statistical web sites, we generated 376 relational tables and 34,110 subject-predicate-object RDF triples.",
"title": ""
},
{
"docid": "b52bfe9169e1b68fec9ec11b76f458f9",
"text": "Copyright (©) 1999–2003 R Foundation for Statistical Computing. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the R Development Core Team.",
"title": ""
},
{
"docid": "27d0d038c827884b50d1932945a29d94",
"text": "0957-4174/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.eswa.2010.10.024 E-mail addresses: [email protected], ca Software engineering discipline contains several prediction approaches such as test effort prediction, correction cost prediction, fault prediction, reusability prediction, security prediction, effort prediction, and quality prediction. However, most of these prediction approaches are still in preliminary phase and more research should be conducted to reach robust models. Software fault prediction is the most popular research area in these prediction approaches and recently several research centers started new projects on this area. In this study, we investigated 90 software fault prediction papers published between year 1990 and year 2009 and then we categorized these papers according to the publication year. This paper surveys the software engineering literature on software fault prediction and both machine learning based and statistical based approaches are included in this survey. Papers explained in this article reflect the outline of what was published so far, but naturally this is not a complete review of all the papers published so far. This paper will help researchers to investigate the previous studies from metrics, methods, datasets, performance evaluation metrics, and experimental results perspectives in an easy and effective manner. Furthermore, current trends are introduced and discussed. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "db0c7a200d76230740e027c2966b066c",
"text": "BACKGROUND\nPromotion and provision of low-cost technologies that enable improved water, sanitation, and hygiene (WASH) practices are seen as viable solutions for reducing high rates of morbidity and mortality due to enteric illnesses in low-income countries. A number of theoretical models, explanatory frameworks, and decision-making models have emerged which attempt to guide behaviour change interventions related to WASH. The design and evaluation of such interventions would benefit from a synthesis of this body of theory informing WASH behaviour change and maintenance.\n\n\nMETHODS\nWe completed a systematic review of existing models and frameworks through a search of related articles available in PubMed and in the grey literature. Information on the organization of behavioural determinants was extracted from the references that fulfilled the selection criteria and synthesized. Results from this synthesis were combined with other relevant literature, and from feedback through concurrent formative and pilot research conducted in the context of two cluster-randomized trials on the efficacy of WASH behaviour change interventions to inform the development of a framework to guide the development and evaluation of WASH interventions: the Integrated Behavioural Model for Water, Sanitation, and Hygiene (IBM-WASH).\n\n\nRESULTS\nWe identified 15 WASH-specific theoretical models, behaviour change frameworks, or programmatic models, of which 9 addressed our review questions. Existing models under-represented the potential role of technology in influencing behavioural outcomes, focused on individual-level behavioural determinants, and had largely ignored the role of the physical and natural environment. IBM-WASH attempts to correct this by acknowledging three dimensions (Contextual Factors, Psychosocial Factors, and Technology Factors) that operate on five-levels (structural, community, household, individual, and habitual).\n\n\nCONCLUSIONS\nA number of WASH-specific models and frameworks exist, yet with some limitations. The IBM-WASH model aims to provide both a conceptual and practical tool for improving our understanding and evaluation of the multi-level multi-dimensional factors that influence water, sanitation, and hygiene practices in infrastructure-constrained settings. We outline future applications of our proposed model as well as future research priorities needed to advance our understanding of the sustained adoption of water, sanitation, and hygiene technologies and practices.",
"title": ""
},
{
"docid": "346bedcddf74d56db8b2d5e8b565efef",
"text": "Ulric Neisser (Chair) Gwyneth Boodoo Thomas J. Bouchard, Jr. A. Wade Boykin Nathan Brody Stephen J. Ceci Diane E Halpern John C. Loehlin Robert Perloff Robert J. Sternberg Susana Urbina Emory University Educational Testing Service, Princeton, New Jersey University of Minnesota, Minneapolis Howard University Wesleyan University Cornell University California State University, San Bernardino University of Texas, Austin University of Pittsburgh Yale University University of North Florida",
"title": ""
},
{
"docid": "08847edfd312791b67c34b79d362cde7",
"text": "We describe a formally well founded approach to link data and processes conceptually, based on adopting UML class diagrams to represent data, and BPMN to represent the process. The UML class diagram together with a set of additional process variables, called Artifact, form the information model of the process. All activities of the BPMN process refer to such an information model by means of OCL operation contracts. We show that the resulting semantics while abstract is fully executable. We also provide an implementation of the executor.",
"title": ""
},
{
"docid": "279302300cbdca5f8d7470532928f9bd",
"text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) of fer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially take s into account the existing bounds on the generalization erro r for Support Vector Machines (SVMs). This new approach is compared to the traditional method of performing crossvalidation and to other existing algorithms for feature selection.",
"title": ""
},
{
"docid": "455a2974a8cda70c6b72819d96c867d9",
"text": "We have developed Cu-Cu/adhesives hybrid bonding technique by using collective cutting of Cu bumps and adhesives in order to achieve high density 2.5D/3D integration. It is considered that progression of high density interconnection leads to lower height of bonding electrodes, resulting in narrow gap between ICs. Therefore, it is difficult to fill in adhesive to such a narrow gap ICs after bonding. Thus, we consider that hybrid bonding of pre-applied adhesives and Cu-Cu thermocompression bonding must be advantageous, in terms of void less bonding and minimizing bonding stress by adhesives and also low electricity by Cu-Cu solid diffusion bonding. In the present study, we adapted the following process; at first adhesives were spin coated on the wafer with Cu post and then pre-baked. After that, pre-applied adhesives and Cu bumps were simultaneously cut by single crystal diamond bite. We found that both adhesives and Cu post surfaces after cutting have highly smooth surface less than 10nm, and dishing phenomena, which might be occurred in typical CMP process, could not be seen on the cut Cu post/ adhesives surfaces.",
"title": ""
},
{
"docid": "4fa43a3d0631d9cd2cdc87e9f0c97136",
"text": "Recent trends on how video games are played have pushed for the need to revise the game engine architecture. Indeed, game players are more mobile, using smartphones and tablets that lack CPU resources compared to PC and dedicated box. Two emerging solutions, cloud gaming and computing offload, would represent the next steps toward improving game player experience. By consequence, dissecting and analyzing game engines performances would help to better understand how to move to these new directions, which is so far missing in the literature. In this paper, we fill this gap by analyzing and evaluating one of the most popular game engine, namely Unity3D. First, we dissected the Unity3D architecture and modules. A benchmark was then used to evaluate the CPU and GPU performances of the different modules constituting Unity3D, for five representative games.",
"title": ""
},
{
"docid": "6bdf0850725f091fea6bcdf7961e27d0",
"text": "The aim of this review is to document the advantages of exclusive breastfeeding along with concerns which may hinder the practice of breastfeeding and focuses on the appropriateness of complementary feeding and feeding difficulties which infants encounter. Breastfeeding, as recommended by the World Health Organisation, is the most cost effective way for reducing childhood morbidity such as obesity, hypertension and gastroenteritis as well as mortality. There are several factors that either promote or act as barriers to good infant nutrition. Factors which influence breastfeeding practice in terms of initiation, exclusivity and duration are namely breast engorgement, sore nipples, milk insufficiency and availability of various infant formulas. On the other hand, introduction of complementary foods, also known as weaning, is done around 4 to 6 months and mothers usually should start with home-made nutritious food. Difficulties encountered during the weaning process are often refusal to eat followed by vomiting, colic, allergic reactions and diarrhoea. key words: Exclusive breastfeeding, Weaning, Complementary feeding, Feeding difficulties.",
"title": ""
},
{
"docid": "6c58c147bef99a2408859bdfa63da3a7",
"text": "We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or -greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates nearoptimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.",
"title": ""
},
{
"docid": "3cf7fc89e6a9b7295079dd74014f166b",
"text": "BACKGROUND\nHigh-resolution MRI has been shown to be capable of identifying plaque constituents, such as the necrotic core and intraplaque hemorrhage, in human carotid atherosclerosis. The purpose of this study was to evaluate differential contrast-weighted images, specifically a multispectral MR technique, to improve the accuracy of identifying the lipid-rich necrotic core and acute intraplaque hemorrhage in vivo.\n\n\nMETHODS AND RESULTS\nEighteen patients scheduled for carotid endarterectomy underwent a preoperative carotid MRI examination in a 1.5-T GE Signa scanner using a protocol that generated 4 contrast weightings (T1, T2, proton density, and 3D time of flight). MR images of the vessel wall were examined for the presence of a lipid-rich necrotic core and/or intraplaque hemorrhage. Ninety cross sections were compared with matched histological sections of the excised specimen in a double-blinded fashion. Overall accuracy (95% CI) of multispectral MRI was 87% (80% to 94%), sensitivity was 85% (78% to 92%), and specificity was 92% (86% to 98%). There was good agreement between MRI and histological findings, with a value of kappa=0.69 (0.53 to 0.85).\n\n\nCONCLUSIONS\nMultispectral MRI can identify the lipid-rich necrotic core in human carotid atherosclerosis in vivo with high sensitivity and specificity. This MRI technique provides a noninvasive tool to study the pathogenesis and natural history of carotid atherosclerosis. Furthermore, it will permit a direct assessment of the effect of pharmacological therapy, such as aggressive lipid lowering, on plaque lipid composition.",
"title": ""
}
] | scidocsrr |
531aefed1c715eb557022660055f7803 | Patch-Based Near-Optimal Image Denoising | [
{
"docid": "c1f6052ecf802f1b4b2e9fd515d7ea15",
"text": "In recent years there has been a growing interest in the study of sparse representation of signals. Using an overcomplete dictionary that contains prototype signal-atoms, signals are described by sparse linear combinations of these atoms. Applications that use sparse representation are many and include compression, regularization in inverse problems, feature extraction, and more. Recent activity in this field concentrated mainly on the study of pursuit algorithms that decompose signals with respect to a given dictionary. Designing dictionaries to better fit the above model can be done by either selecting one from a pre-specified set of linear transforms, or by adapting the dictionary to a set of training signals. Both these techniques have been considered, but this topic is largely still open. In this paper we propose a novel algorithm for adapting dictionaries in order to achieve sparse signal representations. Given a set of training signals, we seek the dictionary that leads to the best representation for each member in this set, under strict sparsity constraints. We present a new method – the K-SVD algorithm – generalizing the K-Means clustering process. K-SVD is an iterative method that alternates between sparse coding of the examples based on the current dictionary, and a process of updating the dictionary atoms to better fit the data. The update of the dictionary columns is combined with an update of the sparse representations, thereby accelerating convergence. The K-SVD algorithm is flexible and can work with any pursuit method (e.g., basis pursuit, FOCUSS, or matching pursuit). We analyze this algorithm and demonstrate its results on both synthetic tests and in applications on real image data.",
"title": ""
},
{
"docid": "809aed520d0023535fec644e81ddbb53",
"text": "This paper presents an efficient image denoising scheme by using principal component analysis (PCA) with local pixel grouping (LPG). For a better preservation of image local structures, a pixel and its nearest neighbors are modeled as a vector variable, whose training samples are selected from the local window by using block matching based LPG. Such an LPG procedure guarantees that only the sample blocks with similar contents are used in the local statistics calculation for PCA transform estimation, so that the image local features can be well preserved after coefficient shrinkage in the PCA domain to remove the noise. The LPG-PCA denoising procedure is iterated one more time to further improve the denoising performance, and the noise level is adaptively adjusted in the second stage. Experimental results on benchmark test images demonstrate that the LPG-PCA method achieves very competitive denoising performance, especially in image fine structure preservation, compared with state-of-the-art denoising algorithms. & 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0771cd99e6ad19deb30b5c70b5c98183",
"text": "We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2D image fragments (e.g., blocks) into 3D data arrays which we call \"groups.\" Collaborative Altering is a special procedure developed to deal with these 3D groups. We realize it using the three successive steps: 3D transformation of a group, shrinkage of the transform spectrum, and inverse 3D transformation. The result is a 3D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.",
"title": ""
}
] | [
{
"docid": "2cd15bbcc96d92279260a14d7cf471db",
"text": "A converter system comprising series-connected converter submodules based on medium-frequency (MF)-DC/DC converters can replace the conventional traction transformer. This reduces mass and losses. The use of multilevel topology permits the connection to the HV catenary. A suitably chosen DC link voltage avoids excessively oversizing the power semiconductors, while providing sufficient redundancy. The medium-frequency switching performance of typical and dedicated 6.5kV IGBTs has been characterized and is discussed here (→ZCS, ZVS).",
"title": ""
},
{
"docid": "685ff0d68e039aa7aa1cc04468c208f4",
"text": "Automated static analysis can identify potential source code anomalies early in the software process that could lead to field failures. However, only a small portion of static analysis alerts may be important to the developer (actionable). The remainder are false positives (unactionable). We propose a process for building false positive mitigation models to classify static analysis alerts as actionable or unactionable using machine learning techniques. For two open source projects, we identify sets of alert characteristics predictive of actionable and unactionable alerts out of 51 candidate characteristics. From these selected characteristics, we evaluate 15 machine learning algorithms, which build models to classify alerts. We were able to obtain 88-97% average accuracy for both projects in classifying alerts using three to 14 alert characteristics. Additionally, the set of selected alert characteristics and best models differed between the two projects, suggesting that false positive mitigation models should be project-specific.",
"title": ""
},
{
"docid": "ed8be77c2fb68b36c13f71d6afc59076",
"text": "We provide a comparative analysis of the existing MITM (Man-In-The-Middle) attacks on Bluetooth. In addition, we propose a novel Bluetooth MITM attack against Bluetooth- enabled printers that support SSP (Secure Simple Pairing). Our attack is based on the fact that the security of the protocol is likely to be limited by the capabilities of the least powerful or the least secure device type. Moreover, we propose improvements to the existing Bluetooth SSP in order to make it more secure.",
"title": ""
},
{
"docid": "0ebdf5dae3ce2265b9b740aba5484a7c",
"text": "The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them.",
"title": ""
},
{
"docid": "be8cfa012ffba4ee8017c3e299a88fb0",
"text": "The present study examined (1) the impact of a brief substance use intervention on delay discounting and indices of substance reward value (RV), and (2) whether baseline values and posttreatment change in these behavioral economic variables predict substance use outcomes. Participants were 97 heavy drinking college students (58.8% female, 41.2% male) who completed a brief motivational intervention (BMI) and then were randomized to one of two conditions: a supplemental behavioral economic intervention that attempted to increase engagement in substance-free activities associated with delayed rewards (SFAS) or an Education control (EDU). Demand intensity, and Omax, decreased and elasticity significantly increased after treatment, but there was no effect for condition. Both baseline values and change in RV, but not discounting, predicted substance use outcomes at 6-month follow-up. Students with high RV who used marijuana were more likely to reduce their use after the SFAS intervention. These results suggest that brief interventions may reduce substance reward value, and that changes in reward value are associated with subsequent drinking and drug use reductions. High RV marijuana users may benefit from intervention elements that enhance future time orientation and substance-free activity participation.",
"title": ""
},
{
"docid": "ed2ac159196ce7cf79eb8ee1c258d3f8",
"text": "To uncover regulatory mechanisms in Hedgehog (Hh) signaling, we conducted genome-wide screens to identify positive and negative pathway components and validated top hits using multiple signaling and differentiation assays in two different cell types. Most positive regulators identified in our screens, including Rab34, Pdcl, and Tubd1, were involved in ciliary functions, confirming the central role for primary cilia in Hh signaling. Negative regulators identified included Megf8, Mgrn1, and an unannotated gene encoding a tetraspan protein we named Atthog. The function of these negative regulators converged on Smoothened (SMO), an oncoprotein that transduces the Hh signal across the membrane. In the absence of Atthog, SMO was stabilized at the cell surface and concentrated in the ciliary membrane, boosting cell sensitivity to the ligand Sonic Hedgehog (SHH) and consequently altering SHH-guided neural cell-fate decisions. Thus, we uncovered genes that modify the interpretation of morphogen signals by regulating protein-trafficking events in target cells.",
"title": ""
},
{
"docid": "169129603c8931bb6ef98b813631db8e",
"text": "Software development is a people intensive activity. The abilities possessed by developers are strongly related to process productivity and final product quality. Thus, one of the most important decisions to be made by a software project manager is how to properly staff the project. However, staffing software projects is not a simple task. There are many alternatives to ponder, several developer-to-activity combinations to evaluate, and the manager may have to choose a team from a larger set of available developers, according to the project and organizational needs. Therefore, to perform the staffing activity with ad hoc procedures can be very difficult and can lead the manager to choose a team that is not the best for a given situation. This work presents an optimization-based approach to support staffing a software project. The staffing problem is modeled and solved as a constraint satisfaction problem. Our approach takes into account the characteristics of the project activities, the available human resources, and constraints established by the software development organization. According to these needs, the project manager selects a utility function to be maximized or minimized by the optimizer. We propose several utility functions, each addressing values that can be sought by the development organization. A decision support tool was implemented and used in an experimental study executed to evaluate the relevance of the proposed approach. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "02c00d998952d935ee694922953c78d1",
"text": "OBJECTIVE\nEffect of peppermint on exercise performance was previously investigated but equivocal findings exist. This study aimed to investigate the effects of peppermint ingestion on the physiological parameters and exercise performance after 5 min and 1 h.\n\n\nMATERIALS AND METHODS\nThirty healthy male university students were randomly divided into experimental (n=15) and control (n=15) groups. Maximum isometric grip force, vertical and long jumps, spirometric parameters, visual and audio reaction times, blood pressure, heart rate, and breath rate were recorded three times: before, five minutes, and one hour after single dose oral administration of peppermint essential oil (50 µl). Data were analyzed using repeated measures ANOVA.\n\n\nRESULTS\nOur results revealed significant improvement in all of the variables after oral administration of peppermint essential oil. Experimental group compared with control group showed an incremental and a significant increase in the grip force (36.1%), standing vertical jump (7.0%), and standing long jump (6.4%). Data obtained from the experimental group after five minutes exhibited a significant increase in the forced vital capacity in first second (FVC1)(35.1%), peak inspiratory flow rate (PIF) (66.4%), and peak expiratory flow rate (PEF) (65.1%), whereas after one hour, only PIF shown a significant increase as compare with the baseline and control group. At both times, visual and audio reaction times were significantly decreased. Physiological parameters were also significantly improved after five minutes. A considerable enhancement in the grip force, spiromery, and other parameters were the important findings of this study. Conclusion : An improvement in the spirometric measurements (FVC1, PEF, and PIF) might be due to the peppermint effects on the bronchial smooth muscle tonicity with or without affecting the lung surfactant. Yet, no scientific evidence exists regarding isometric force enhancement in this novel study.",
"title": ""
},
{
"docid": "8c89db7cda2547a9f84dec7a0990cd59",
"text": "In this paper, a changeable winding brushless DC (BLDC) motor for the expansion of the speed region is described. The changeable winding BLDC motor is driven by a large number of phase turns at low speeds and by a reduced number of turns at high speeds. For this reason, the section where the winding changes is very important. Ideally, the time at which the windings are to be converted should be same as the time at which the voltage changes. However, if this timing is not exactly synchronized, a large current is generated in the motor, and the demagnetization of the permanent magnet occurs. In addition, a large torque ripple is produced. In this paper, we describe the demagnetization of the permanent magnet in a fault situation when the windings change, and we suggest a design process to solve this problem.",
"title": ""
},
{
"docid": "4bcbe82e888e504fdc5f230de79e14e7",
"text": "In this paper, we present results of an empirical investigation into the social structure of YouTube, addressing friend relations and their correlation with tags applied to uploaded videos. Results indicate that YouTube producers are strongly linked to others producing similar content. Furthermore, there is a socially cohesive core of producers of mixed content, with smaller cohesive groups around Korean music video and anime music videos. Thus, social interaction on YouTube appears to be structured in ways similar to other social networking sites, but with greater semantic coherence around content. These results are explained in terms of the relationship of video producers to the tagging of uploaded content on the site.",
"title": ""
},
{
"docid": "6c89c95f3fcc3c0f1da3f4ae16e0475e",
"text": "Understanding search queries is a hard problem as it involves dealing with “word salad” text ubiquitously issued by users. However, if a query resembles a well-formed question, a natural language processing pipeline is able to perform more accurate interpretation, thus reducing downstream compounding errors. Hence, identifying whether or not a query is well formed can enhance query understanding. Here, we introduce a new task of identifying a well-formed natural language question. We construct and release a dataset of 25,100 publicly available questions classified into well-formed and non-wellformed categories and report an accuracy of 70.7% on the test set. We also show that our classifier can be used to improve the performance of neural sequence-to-sequence models for generating questions for reading comprehension.",
"title": ""
},
{
"docid": "bc7fc7c69813338406d4e4b1828498fe",
"text": "The task of generating natural images from 3D scenes has been a long standing goal in computer graphics. On the other hand, recent developments in deep neural networks allow for trainable models that can produce natural-looking images with little or no knowledge about the scene structure. While the generated images often consist of realistic looking local patterns, the overall structure of the generated images is often inconsistent. In this work we propose a trainable, geometry-aware image generation method that leverages various types of scene information, including geometry and segmentation, to create realistic looking natural images that match the desired scene structure. Our geometrically-consistent image synthesis method is a deep neural network, called Geometry to Image Synthesis (GIS) framework, which retains the advantages of a trainable method, e.g., differentiability and adaptiveness, but, at the same time, makes a step towards the generalizability, control and quality output of modern graphics rendering engines. We utilize the GIS framework to insert vehicles in outdoor driving scenes, as well as to generate novel views of objects from the Linemod dataset. We qualitatively show that our network is able to generalize beyond the training set to novel scene geometries, object shapes and segmentations. Furthermore, we quantitatively show that the GIS framework can be used to synthesize large amounts of training data which proves beneficial for training instance segmentation models.",
"title": ""
},
{
"docid": "b04539950e17001e850efdc246db9acf",
"text": "Today's interconnected computer network is complex and is constantly growing in size . As per OWASP Top10 list 2013[1] the top vulnerability in web application is listed as injection attack. SQL injection[2] is the most dangerous attack among injection attacks. Most of the available techniques provide an incomplete solution. While attacking using SQL injection attacker probably use space, single quotes or double dashes in his input so as to change the indented meaning of the runtime query generated based on these inputs. Stored procedure based and second order SQL injection are two types of SQL injection that are difficult to detect and hence difficult to prevent. This work concentrates on Stored procedure based and second",
"title": ""
},
{
"docid": "a7f2acee9997f3bcb9bbb528bb383a94",
"text": "Identifying sparse salient structures from dense pixels is a longstanding problem in visual computing. Solutions to this problem can benefit both image manipulation and understanding. In this paper, we introduce an image transform based on the L1 norm for piecewise image flattening. This transform can effectively preserve and sharpen salient edges and contours while eliminating insignificant details, producing a nearly piecewise constant image with sparse structures. A variant of this image transform can perform edge-preserving smoothing more effectively than existing state-of-the-art algorithms. We further present a new method for complex scene-level intrinsic image decomposition. Our method relies on the above image transform to suppress surface shading variations, and perform probabilistic reflectance clustering on the flattened image instead of the original input image to achieve higher accuracy. Extensive testing on the Intrinsic-Images-in-the-Wild database indicates our method can perform significantly better than existing techniques both visually and numerically. The obtained intrinsic images have been successfully used in two applications, surface retexturing and 3D object compositing in photographs.",
"title": ""
},
{
"docid": "34690f455f9e539b06006f30dd3e512b",
"text": "Disaster relief operations rely on the rapid deployment of wireless network architectures to provide emergency communications. Future emergency networks will consist typically of terrestrial, portable base stations and base stations on-board low altitude platforms (LAPs). The effectiveness of network deployment will depend on strategically chosen station positions. In this paper a method is presented for calculating the optimal proportion of the two station types and their optimal placement. Random scenarios and a real example from Hurricane Katrina are used for evaluation. The results confirm the strength of LAPs in terms of high bandwidth utilisation, achieved by their ability to cover wide areas, their portability and adaptability to height. When LAPs are utilized, the total required number of base stations to cover a desired area is generally lower. For large scale disasters in particular, this leads to shorter response times and the requirement of fewer resources. This goal can be achieved more easily if algorithms such as the one presented in this paper are used.",
"title": ""
},
{
"docid": "3bbcb7c2c3043e1c1dad59e22286f3e0",
"text": "This study used eye-tracking technology to assess where helpers look as they are providing assistance to a worker during collaborative physical tasks. Gaze direction was coded into one of six categories: partner's head, partner's hands, task parts and tools, the completed task, and instruction manual. Results indicated that helpers rarely gazed at their partners' faces, but distributed gaze fairly evenly across the other targets. The results have implications for the design of video systems to support collaborative physical tasks.",
"title": ""
},
{
"docid": "313c8ba6d61a160786760543658185df",
"text": "In this review, we collate information about ticks identified in different parts of the Sudan and South Sudan since 1956 in order to identify gaps in tick prevalence and create a map of tick distribution. This will avail basic data for further research on ticks and policies for the control of tick-borne diseases. In this review, we discuss the situation in the Republic of South Sudan as well as Sudan. For this purpose we have divided Sudan into four regions, namely northern Sudan (Northern and River Nile states), central Sudan (Khartoum, Gazera, White Nile, Blue Nile and Sennar states), western Sudan (North and South Kordofan and North, South and West Darfour states) and eastern Sudan (Red Sea, Kassala and Gadarif states).",
"title": ""
},
{
"docid": "affa48f455d5949564302b4c23324458",
"text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.",
"title": ""
},
{
"docid": "4b0b7dfa79556970e900a129d06e3b0c",
"text": "We present the science and technology roadmap for graphene, related two-dimensional crystals, and hybrid systems, targeting an evolution in technology, that might lead to impacts and benefits reaching into most areas of society. This roadmap was developed within the framework of the European Graphene Flagship and outlines the main targets and research areas as best understood at the start of this ambitious project. We provide an overview of the key aspects of graphene and related materials (GRMs), ranging from fundamental research challenges to a variety of applications in a large number of sectors, highlighting the steps necessary to take GRMs from a state of raw potential to a point where they might revolutionize multiple industries. We also define an extensive list of acronyms in an effort to standardize the nomenclature in this emerging field.",
"title": ""
},
{
"docid": "44272dd2c30ada5b63cc6244c194c43f",
"text": "This paper proposes a method to achieve fast and fluid human-robot interaction by estimating the progress of the movement of the human. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded; a problem typically found when using motion capture systems in cluttered environments. By leveraging on the framework of Interaction Probabilistic Movement Primitives (ProMPs), phase estimation makes it possible to classify the human action, and to generate a corresponding robot trajectory before the human finishes his/her movement. The method is therefore suited for semiautonomous robots acting as assistants and coworkers. Since observations may be sparse, our method is based on computing the probability of different phase candidates to find the phase that best aligns the Interaction ProMP with the current observations. The method is fundamentally different from approaches based on Dynamic Time Warping (DTW) that must rely on a consistent stream of measurements at runtime. The phase estimation algorithm can be seamlessly integrated into Interaction ProMPs such that robot trajectory coordination, phase estimation, and action recognition can all be achieved in a single probabilistic framework. We evaluated the method using a 7-DoF lightweight robot arm equipped with a 5-finger hand in single and multi-task collaborative experiments. We compare the accuracy achieved by phase estimation with our previous method based on DTW.",
"title": ""
}
] | scidocsrr |
241d93d13aff6824ccc7b6221b6bf765 | Imaging human EEG dynamics using independent component analysis | [
{
"docid": "3d5fb6eff6d0d63c17ef69c8130d7a77",
"text": "A new measure of event-related brain dynamics, the event-related spectral perturbation (ERSP), is introduced to study event-related dynamics of the EEG spectrum induced by, but not phase-locked to, the onset of the auditory stimuli. The ERSP reveals aspects of event-related brain dynamics not contained in the ERP average of the same response epochs. Twenty-eight subjects participated in daily auditory evoked response experiments during a 4 day study of the effects of 24 h free-field exposure to intermittent trains of 89 dB low frequency tones. During evoked response testing, the same tones were presented through headphones in random order at 5 sec intervals. No significant changes in behavioral thresholds occurred during or after free-field exposure. ERSPs induced by target pips presented in some inter-tone intervals were larger than, but shared common features with, ERSPs induced by the tones, most prominently a ridge of augmented EEG amplitude from 11 to 18 Hz, peaking 1-1.5 sec after stimulus onset. Following 3-11 h of free-field exposure, this feature was significantly smaller in tone-induced ERSPs; target-induced ERSPs were not similarly affected. These results, therefore, document systematic effects of exposure to intermittent tones on EEG brain dynamics even in the absence of changes in auditory thresholds.",
"title": ""
}
] | [
{
"docid": "55f95c7b59f17fb210ebae97dbd96d72",
"text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.",
"title": ""
},
{
"docid": "c063474634eb427cf0215b4500182f8c",
"text": "Factorization Machines offer good performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory adaptive constraints and frequency adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency in computational advertising datasets with billions examples and features.",
"title": ""
},
{
"docid": "8b760eff727b1119ff73ec1ba234a675",
"text": "Substrate integrated waveguide (SIW) is a new high Q, low loss, low cost, easy processing and integrating planar waveguide structure, which can be widely used in microwave and millimeter-wave integrated circuit. A five-elements resonant slot array antenna at 35GHz has been designed in this paper with a bandwidth of 500MHz (S11<;-15dB), gain of 11.5dB and sidelobe level (SLL) of -23.5dB (using Taylor weighted), which has a small size, low cost and is easy to integrate, etc.",
"title": ""
},
{
"docid": "d35c44a54eaa294a60379b00dd0ce270",
"text": "Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep believe networks (DBNs) and CNNs.",
"title": ""
},
{
"docid": "5ebefc9d5889cb9c7e3f83a8b38c4cb4",
"text": "As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.",
"title": ""
},
{
"docid": "f0245dca8cc1d3c418c0d915c7982484",
"text": "The injection of a high-frequency signal in the stator via inverter has been shown to be a viable option to estimate the magnet temperature in permanent-magnet synchronous machines (PMSMs). The variation of the magnet resistance with temperature is reflected in the stator high-frequency resistance, which can be measured from the resulting current when a high-frequency voltage is injected. However, this method is sensitive to d- and q-axis inductance (Ld and Lq) variations, as well as to the machine speed. In addition, it is only suitable for surface PMSMs (SPMSMs) and inadequate for interior PMSMs (IPMSMs). In this paper, the use of a pulsating high-frequency current injection in the d-axis of the machine for temperature estimation purposes is proposed. The proposed method will be shown to be insensitive to the speed, Lq, and Ld variations. Furthermore, it can be used with both SPMSMs and IPMSMs.",
"title": ""
},
{
"docid": "c927ca7a74732032dd7a0b8ea907640b",
"text": "We propose a Bayesian optimization algorithm for objective functions that are sums or integrals of expensive-to-evaluate functions, allowing noisy evaluations. These objective functions arise in multi-task Bayesian optimization for tuning machine learning hyperparameters, optimization via simulation, and sequential design of experiments with random environmental conditions. Our method is average-case optimal by construction when a single evaluation of the integrand remains within our evaluation budget. Achieving this one-step optimality requires solving a challenging value of information optimization problem, for which we provide a novel efficient discretization-free computational method. We also provide consistency proofs for our method in both continuum and discrete finite domains for objective functions that are sums. In numerical experiments comparing against previous state-of-the-art methods, including those that also leverage sum or integral structure, our method performs as well or better across a wide range of problems and offers significant improvements when evaluations are noisy or the integrand varies smoothly in the integrated variables.",
"title": ""
},
{
"docid": "7cbe504e03ab802389c48109ed1f1802",
"text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.",
"title": ""
},
{
"docid": "4c82a4e51633b87f2f6b2619ca238686",
"text": "Allocentric space is mapped by a widespread brain circuit of functionally specialized cell types located in interconnected subregions of the hippocampal-parahippocampal cortices. Little is known about the neural architectures required to express this variety of firing patterns. In rats, we found that one of the cell types, the grid cell, was abundant not only in medial entorhinal cortex (MEC), where it was first reported, but also in pre- and parasubiculum. The proportion of grid cells in pre- and parasubiculum was comparable to deep layers of MEC. The symmetry of the grid pattern and its relationship to the theta rhythm were weaker, especially in presubiculum. Pre- and parasubicular grid cells intermingled with head-direction cells and border cells, as in deep MEC layers. The characterization of a common pool of space-responsive cells in architecturally diverse subdivisions of parahippocampal cortex constrains the range of mechanisms that might give rise to their unique functional discharge phenotypes.",
"title": ""
},
{
"docid": "0f9a33f8ef5c9c415cf47814c9ef896d",
"text": "BACKGROUND\nNeuropathic pain is one of the most devastating kinds of chronic pain. Neuroinflammation has been shown to contribute to the development of neuropathic pain. We have previously demonstrated that lumbar spinal cord-infiltrating CD4+ T lymphocytes contribute to the maintenance of mechanical hypersensitivity in spinal nerve L5 transection (L5Tx), a murine model of neuropathic pain. Here, we further examined the phenotype of the CD4+ T lymphocytes involved in the maintenance of neuropathic pain-like behavior via intracellular flow cytometric analysis and explored potential interactions between infiltrating CD4+ T lymphocytes and spinal cord glial cells.\n\n\nRESULTS\nWe consistently observed significantly higher numbers of T-Bet+, IFN-γ+, TNF-α+, and GM-CSF+, but not GATA3+ or IL-4+, lumbar spinal cord-infiltrating CD4+ T lymphocytes in the L5Tx group compared to the sham group at day 7 post-L5Tx. This suggests that the infiltrating CD4+ T lymphocytes expressed a pro-inflammatory type 1 phenotype (Th1). Despite the observation of CD4+ CD40 ligand (CD154)+ T lymphocytes in the lumbar spinal cord post-L5Tx, CD154 knockout (KO) mice did not display significant changes in L5Tx-induced mechanical hypersensitivity, indicating that T lymphocyte-microglial interaction through the CD154-CD40 pathway is not necessary for L5Tx-induced hypersensitivity. In addition, spinal cord astrocytic activation, represented by glial fibillary acidic protein (GFAP) expression, was significantly lower in CD4 KO mice compared to wild type (WT) mice at day 14 post-L5Tx, suggesting the involvement of astrocytes in the pronociceptive effects mediated by infiltrating CD4+ T lymphocytes.\n\n\nCONCLUSIONS\nIn all, these data indicate that the maintenance of L5Tx-induced neuropathic pain is mostly mediated by Th1 cells in a CD154-independent manner via a mechanism that could involve multiple Th1 cytokines and astrocytic activation.",
"title": ""
},
{
"docid": "0182e6dcf7c8ec981886dfa2586a0d5d",
"text": "MOTIVATION\nMetabolomics is a post genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discrimating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discrimate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitichondia and chloroplasts).\n\n\nOVERVIEW\nA gas chromotography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discrimated between each other and their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are two most important metabolites for discriminating between the crosses. These results are consistant with genotype differences in mitochondia and chloroplasts.",
"title": ""
},
{
"docid": "9524269df0e8fbae27ee4e63d47b327b",
"text": "The quantum of power that a given EHVAC transmission line can safely carry depends on various limits. These limits can be categorized into two types viz. thermal and stability/SIL limits. In case of long lines the capacity is limited by its SIL level only which is much below its thermal capacity due to large inductance. Decrease in line inductance and surge impedance shall increase the SIL and transmission capacity. This paper presents a mathematical model of increasing the SIL level towards thermal limit. Sensitivity of SIL on various configuration of sub-conductors in a bundle, bundle spacing, tower structure, spacing of phase conductors etc. is analyzed and presented. Various issues that need attention for application of high surge impedance loading (HSIL) line are also deliberated",
"title": ""
},
{
"docid": "d6aba23081e11b61d146276e77b3d3cd",
"text": "This paper presents a quantitative performance analysis of a conventional passive cell balancing method and a proposed active cell balancing method for automotive batteries. The proposed active cell balancing method was designed to perform continuous cell balancing during charge and discharge with high balancing current. An experimentally validated model was used to simulate the balancing process of both balancing circuits for a high capacity battery module. The results suggest that the proposed method can improve the power loss and extend the discharge time of a battery module. Hence, a higher energy output can be yielded.",
"title": ""
},
{
"docid": "69519dd7e60899acd8b81c141321b052",
"text": "In this paper we address the question of how closely everyday human teachers match a theoretically optimal teacher. We present two experiments in which subjects teach a concept to our robot in a supervised fashion. In the first experiment we give subjects no instructions on teaching and observe how they teach naturally as compared to an optimal strategy. We find that people are suboptimal in several dimensions. In the second experiment we try to elicit the optimal teaching strategy. People can teach much faster using the optimal teaching strategy, however certain parts of the strategy are more intuitive than others.",
"title": ""
},
{
"docid": "1e6c2319e7c9e51cd4e31107d56bce91",
"text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. are being integrated by marketers today to analyse eWOM and derive actionable intelligence.",
"title": ""
},
{
"docid": "3c315e5cbf13ffca10f4199d094d2f34",
"text": "Object tracking under complex circumstances is a challenging task because of background interference, obstacle occlusion, object deformation, etc. Given such conditions, robustly detecting, locating, and analyzing a target through single-feature representation are difficult tasks. Global features, such as color, are widely used in tracking, but may cause the object to drift under complex circumstances. Local features, such as HOG and SIFT, can precisely represent rigid targets, but these features lack the robustness of an object in motion. An effective method is adaptive fusion of multiple features in representing targets. The process of adaptively fusing different features is the key to robust object tracking. This study uses a multi-feature joint descriptor (MFJD) and the distance between joint histograms to measure the similarity between a target and its candidate patches. Color and HOG features are fused as the tracked object of the joint representation. This study also proposes a self-adaptive multi-feature fusion strategy that can adaptively adjust the joint weight of the fused features based on their stability and contrast measure scores. The mean shift process is adopted as the object tracking framework with multi-feature representation. The experimental results demonstrate that the proposed MFJD tracking method effectively handles background clutter, partial occlusion by obstacles, scale changes, and deformations. The novel method performs better than several state-of-the-art methods in real surveillance scenarios.",
"title": ""
},
{
"docid": "0b33249df17737a826dcaa197adccb74",
"text": "In the competitive electricity structure, demand response programs enable customers to react dynamically to changes in electricity prices. The implementation of such programs may reduce energy costs and increase reliability. To fully harness such benefits, existing load controllers and appliances need around-the-clock price information. Advances in the development and deployment of advanced meter infrastructures (AMIs), building automation systems (BASs), and various dedicated embedded control systems provide the capability to effectively address this requirement. In this paper we introduce a meter gateway architecture (MGA) to serve as a foundation for integrated control of loads by energy aggregators, facility hubs, and intelligent appliances. We discuss the requirements that motivate the architecture, describe its design, and illustrate its application to a small system with an intelligent appliance and a legacy appliance using a prototype implementation of an intelligent hub for the MGA and ZigBee wireless communications.",
"title": ""
},
{
"docid": "573f12acd3193045104c7d95bbc89f78",
"text": "Automatic Face Recognition is one of the most emphasizing dilemmas in diverse of potential relevance like in different surveillance systems, security systems, authentication or verification of individual like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. It also expands the scope for other researchers for answering the question of effectively dealing with such problems.",
"title": ""
},
{
"docid": "a47d001dc8305885e42a44171c9a94b2",
"text": "Community detection in complex network has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and relatively new proposed link clustering methods have inherent drawbacks to discover overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing node clustering technique. The network decomposition contributes to reducing the computation time and noise link elimination conduces to improving the quality of obtained communities. Besides, we employ node clustering technique rather than link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.",
"title": ""
},
{
"docid": "5687ab1eadd481b6008835817a5dbe0b",
"text": "Due to the importance of PM synchronous machine in many categories like in industrial, mechatronics, automotive, energy storage flywheel, centrifugal compressor, vacuum pump and robotic applications moreover in smart power grid applications, this paper is presented. It reviews the improvement of permanent magnet synchronous machines performance researches. This is done depending on many researchers' papers as samples for many aspects like: modelling, control, optimization and design to present a satisfied literature review",
"title": ""
}
] | scidocsrr |
24c70b1ee4001017b1ef9740520874dd | Compositional Vector Space Models for Knowledge Base Inference | [
{
"docid": "8b46e6e341f4fdf4eb18e66f237c4000",
"text": "We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., “very”) with a positive polar adjective (e.g., “good”) produces a phrase (“very good”) with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is nonconvex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bagof-words model.",
"title": ""
},
{
"docid": "78cda62ca882bb09efc08f7d4ea1801e",
"text": "Open Domain: There are nearly an unbounded number of classes, objects and relations Missing Data: Many useful facts are never explicitly stated No Negative Examples: Labeling positive and negative examples for all interesting relations is impractical Learning First-Order Horn Clauses from Web Text Stefan Schoenmackers Oren Etzioni Daniel S. Weld Jesse Davis Turing Center, University of Washington Katholieke Universiteit Leuven",
"title": ""
}
] | [
{
"docid": "011ff2d5995a46a686d9edb80f33b8ca",
"text": "In the era of Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites for users to write such reviews. This rating can be subjective and biased toward user's personality. Business preferences of a user can be decrypted based on his/ her past reviews. In this paper, we deal with (i) uncovering latent topics in Yelp data based on positive and negative reviews using topic modeling to learn which topics are the most frequent among customer reviews, (ii) sentiment analysis of users' reviews to learn how these topics associate to a positive or negative rating which will help businesses improve their offers and services, and (iii) predicting unbiased ratings from user-generated review text alone, using Linear Regression model. We also perform data analysis to get some deeper insights into customer reviews.",
"title": ""
},
{
"docid": "a7aac88bd2862bafc2b4e1e562a7b86a",
"text": "Longitudinal melanonychia presents in various conditions including neoplastic and reactive disorders. It is much more frequently seen in non-Caucasians than Caucasians. While most cases of nail apparatus melanoma start as longitudinal melanonychia, melanocytic nevi of the nail apparatus also typically accompany longitudinal melanonychia. Identifying the suspicious longitudinal melanonychia is therefore an important task for dermatologists. Dermoscopy provides useful information for making this decision. The most suspicious dermoscopic feature of early nail apparatus melanoma is irregular lines on a brown background. Evaluation of the irregularity may be rather subjective, but through experience, dermatologists can improve their diagnostic skills of longitudinal melanonychia, including benign conditions showing regular lines. Other important dermoscopic features of early nail apparatus melanoma are micro-Hutchinson's sign, a wide pigmented band, and triangular pigmentation on the nail plate. Although there is as yet no solid evidence concerning the frequency of dermoscopic follow up, we recommend checking the suspicious longitudinal melanonychia every 6 months. Moreover, patients with longitudinal melanonychia should be asked to return to the clinic quickly if the lesion shows obvious changes. Diagnosis of amelanotic or hypomelanotic melanoma affecting the nail apparatus is also challenging, but melanoma should be highly suspected if remnants of melanin granules are detected dermoscopically.",
"title": ""
},
{
"docid": "f7aceafa35aaacb5b2b854a8b7e275b6",
"text": "In this paper, the study and implementation of a high frequency pulse LED driver with self-oscillating circuit is presented. The self-oscillating half-bridge series resonant inverter is adopted in this LED driver and the circuit characteristics of LED with high frequency pulse driving voltage is also discussed. LED module is connected with full bridge diode rectifier but without low pass filter and this LED module is driven with high frequency pulse. In additional, the self-oscillating resonant circuit with saturable core is used to achieve zero voltage switching and to control the LED current. The LED equivalent circuit of resonant circuit and the operating principle of the self-oscillating half-bridge inverter are discussed in detail. Finally, an 18 W high frequency pulse LED driver is implemented to verify the feasibility. Experimental results show that the circuit efficiency is over 86.5% when input voltage operating within AC 110 ± 10 Vrms and the maximum circuit efficiency is up to 89.2%.",
"title": ""
},
{
"docid": "e729c06c5a4153af05740a01509ee5d5",
"text": "Understanding large-scale document collections in an efficient manner is an important problem. Usually, document data are associated with other information (e.g., an author's gender, age, and location) and their links to other entities (e.g., co-authorship and citation networks). For the analysis of such data, we often have to reveal common as well as discriminative characteristics of documents with respect to their associated information, e.g., male- vs. female-authored documents, old vs. new documents, etc. To address such needs, this paper presents a novel topic modeling method based on joint nonnegative matrix factorization, which simultaneously discovers common as well as discriminative topics given multiple document sets. Our approach is based on a block-coordinate descent framework and is capable of utilizing only the most representative, thus meaningful, keywords in each topic through a novel pseudo-deflation approach. We perform both quantitative and qualitative evaluations using synthetic as well as real-world document data sets such as research paper collections and nonprofit micro-finance data. We show our method has a great potential for providing in-depth analyses by clearly identifying common and discriminative topics among multiple document sets.",
"title": ""
},
{
"docid": "74a3c4dae9573325b292da736d46a78e",
"text": "Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. Yet, more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution.",
"title": ""
},
{
"docid": "1ec8f8e1b34ebcf8a0c99975d2fa58c4",
"text": "BACKGROUND\nTo compare simultaneous recordings from an external patch system specifically designed to ensure better P-wave recordings and standard Holter monitor to determine diagnostic efficacy. Holter monitors are a mainstay of clinical practice, but are cumbersome to access and wear and P-wave signal quality is frequently inadequate.\n\n\nMETHODS\nThis study compared the diagnostic efficacy of the P-wave centric electrocardiogram (ECG) patch (Carnation Ambulatory Monitor) to standard 3-channel (leads V1, II, and V5) Holter monitor (Northeast Monitoring, Maynard, MA). Patients were referred to a hospital Holter clinic for standard clinical indications. Each patient wore both devices simultaneously and served as their own control. Holter and Patch reports were read in a blinded fashion by experienced electrophysiologists unaware of the findings in the other corresponding ECG recording. All patients, technicians, and physicians completed a questionnaire on comfort and ease of use, and potential complications.\n\n\nRESULTS\nIn all 50 patients, the P-wave centric patch recording system identified rhythms in 23 patients (46%) that altered management, compared to 6 Holter patients (12%), P<.001. The patch ECG intervals PR, QRS and QT correlated well with the Holter ECG intervals having correlation coefficients of 0.93, 0.86, and 0.94, respectively. Finally, 48 patients (96%) preferred wearing the patch monitor.\n\n\nCONCLUSIONS\nA single-channel ambulatory patch ECG monitor, designed specifically to ensure that the P-wave component of the ECG be visible, resulted in a significantly improved rhythm diagnosis and avoided inaccurate diagnoses made by the standard 3-channel Holter monitor.",
"title": ""
},
{
"docid": "fa0f3d0d78040d6b89087c24d8b7c07c",
"text": "Named entity recognition is a crucial component of biomedical natural language processing, enabling information extraction and ultimately reasoning over and knowledge discovery from text. Much progress has been made in the design of rule-based and supervised tools, but they are often genre and task dependent. As such, adapting them to different genres of text or identifying new types of entities requires major effort in re-annotation or rule development. In this paper, we propose an unsupervised approach to extracting named entities from biomedical text. We describe a stepwise solution to tackle the challenges of entity boundary detection and entity type classification without relying on any handcrafted rules, heuristics, or annotated data. A noun phrase chunker followed by a filter based on inverse document frequency extracts candidate entities from free text. Classification of candidate entities into categories of interest is carried out by leveraging principles from distributional semantics. Experiments show that our system, especially the entity classification step, yields competitive results on two popular biomedical datasets of clinical notes and biological literature, and outperforms a baseline dictionary match approach. Detailed error analysis provides a road map for future work.",
"title": ""
},
{
"docid": "9d9afbd6168c884f54f72d3daea57ca7",
"text": "0167-8655/$ see front matter 2009 Elsevier B.V. A doi:10.1016/j.patrec.2009.06.012 * Corresponding author. Tel.: +82 2 705 8931; fax: E-mail addresses: [email protected] (S. Yoon), sa Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods. 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "cabfa3e645415d491ed4ca776b9e370a",
"text": "The impact of social networks in customer buying decisions is rapidly increasing, because they are effective in shaping public opinion. This paper helps marketers analyze a social network’s members based on different characteristics as well as choose the best method for identifying influential people among them. Marketers can then use these influential people as seeds for market products/services. Considering the importance of opinion leadership in social networks, the authors provide a comprehensive overview of existing literature. Studies show that different titles (such as opinion leaders, influential people, market mavens, and key players) are used to refer to the influential group in social networks. In this paper, all the properties presented for opinion leaders in the form of different titles are classified into three general categories, including structural, relational, and personal characteristics. Furthermore, based on studying opinion leader identification methods, appropriate parameters are extracted in a comprehensive chart to evaluate and compare these methods accurately. based marketing, word-of-mouth marketing has more creditability (Li & Du, 2011), because there is no direct link between the sender and the merchant. As a result, information is considered independent and subjective. In recent years, many researches in word-of-mouth marketing investigate discovering influential nodes in a social network. These influential people are called opinion leaders in the literature. Organizations interested in e-commerce need to identify opinion leaders among their customers, also the place (web site) which they are going online. This is the place they can market their products. DOI: 10.4018/jvcsn.2011010105 44 International Journal of Virtual Communities and Social Networking, 3(1), 43-59, January-March 2011 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited. Social Network Analysis Regarding the importance of interpersonal relationship, studies are looking for formal methods to measures who talks to whom in a community. These methods are known as social network analysis (Scott, 1991; Wasserman & Faust, 1994; Rogers & Kincaid, 1981; Valente & Davis, 1999). Social network analysis includes the study of the interpersonal relationships. It usually is more focused on the network itself, rather than on the attributes of the members (Li & Du, 2011). Valente and Rogers (1995) have described social network analysis from the point of view of interpersonal communication by “formal methods of measuring who talks to whom within a community”. Social network analysis enables researchers to identify people who are more central in the network and so more influential. By using these central people or opinion leaders as seeds diffusion of a new product or service can be accelerated (Katz & Lazarsfeld, 1955; Valente & Davis, 1999). Importance of Social Networks for Marketing The importance of social networks as a marketing tool is increasing, and it includes diverse areas (Even-Dar & Shapirab, 2011). Analysis of interdependencies between customers can improve targeted marketing as well as help organization in acquisition of new customers who are not detectable by traditional techniques. By recent technological developments social networks are not limited in face-to-face and physical relationships. Furthermore, online social networks have become a new medium for word-of-mouth marketing. 
Although face-to-face word-of-mouth has a greater impact on consumer purchasing decisions than printed information because of its vividness and credibility, in recent years, with the growth of the Internet and virtual communities, the written word-of-mouth (word-of-mouse) has been created in the online channels (Mak, 2008). Consider a company that wants to launch a new product. This company can benefit from popular social networks like Facebook and Myspace rather than using classical advertising channels. Then, convincing several key persons in each network to adopt the new product can help a company to exploit an effective diffusion in the network through word-of-mouth. According to Nielsen’s survey of more than 26,000 internet users, 78% of respondents indicated that recommendations from others are the most trusted source when considering a product or service (Nielsen, 2007). Based on another study conducted by Deloitte’s Consumer Products group, almost 62% of consumers who read consumer-written product reviews online declare that their purchase decisions have been directly influenced by the user reviews (Deloitte, 2007). Empirical studies have demonstrated that new ideas and practices spread through interpersonal communication (Valente & Rogers, 1995; Valente & Davis, 1999; Valente, 1995). Hawkins et al. (1995) suggest that companies can use four possible courses of action, including marketing research, product sampling, retailing/personal selling and advertising, to use their knowledge of opinion leaders to their advantage. The authors of this paper, in a similar study, have done a review of related literature on using social networks for improving marketing response. They discuss the benefits and challenges of utilizing interpersonal relationships in a network as well as opinion leader identification; also, a three-step process to show how firms can apply social networks for their marketing activities has been proposed (Jafari Momtaz et al., 2011). While applications of opinion leadership in business and marketing have been widely studied, the literature generally deals with the development of measurement scales (Burt, 1999), its importance in the social sciences (Flynn et al., 1994), and its application to various areas related to marketing, such as the health care industry, political science (Burt, 1999) and public communications (Howard et al., 2000; Locock et al., 2001). In this paper, a comprehensive review of studies in the field of opinion leadership and employing social networks to improve the marketing response is done.",
"title": ""
},
{
"docid": "8250999ad1b7278ff123cd3c89b5d2d9",
"text": "Drawing on Bronfenbrenner’s ecological theory and prior empirical research, the current study examines the way that blogging and social networking may impact feelings of connection and social support, which in turn could impact maternal well-being (e.g., marital functioning, parenting stress, and depression). One hundred and fifty-seven new mothers reported on their media use and various well-being variables. On average, mothers were 27 years old (SD = 5.15) and infants were 7.90 months old (SD = 5.21). All mothers had access to the Internet in their home. New mothers spent approximately 3 hours on the computer each day, with most of this time spent on the Internet. Findings suggested that frequency of blogging predicted feelings of connection to extended family and friends which then predicted perceptions of social support. This in turn predicted maternal well-being, as measured by marital satisfaction, couple conflict, parenting stress, and depression. In sum, blogging may improve new mothers’ well-being, as they feel more connected to the world outside their home through the Internet.",
"title": ""
},
{
"docid": "aa5d8162801abcc81ac542f7f2a423e5",
"text": "Prediction of popularity has profound impact for social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieves promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) by incorporating both temporal context and temporal attention into account. Our DTCN contains three main components, from embedding, learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new userpost pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).",
"title": ""
},
{
"docid": "708c9b97f4a393ac49688d913b1d2cc6",
"text": "Cognitive NLP systemsi.e., NLP systems that make use of behavioral data augment traditional text-based features with cognitive features extracted from eye-movement patterns, EEG signals, brain-imaging etc.. Such extraction of features is typically manual. We contend that manual extraction of features may not be the best way to tackle text subtleties that characteristically prevail in complex classification tasks like sentiment analysis and sarcasm detection, and that even the extraction and choice of features should be delegated to the learning system. We introduce a framework to automatically extract cognitive features from the eye-movement / gaze data of human readers reading the text and use them as features along with textual features for the tasks of sentiment polarity and sarcasm detection. Our proposed framework is based on Convolutional Neural Network (CNN). The CNN learns features from both gaze and text and uses them to classify the input text. We test our technique on published sentiment and sarcasm labeled datasets, enriched with gaze information, to show that using a combination of automatically learned text and gaze features often yields better classification performance over (i) CNN based systems that rely on text input alone and (ii) existing systems that rely on handcrafted gaze and textual features.",
"title": ""
},
{
"docid": "d5f905fb66ba81ecde0239a4cc3bfe3f",
"text": "Bidirectional path tracing (BDPT) can render highly realistic scenes with complicated lighting scenarios. The Light Vertex Cache (LVC) based BDPT method by Davidovic et al. [Davidovič et al. 2014] provided good performance on scenes with simple materials in a progressive rendering scenario. In this paper, we propose a new bidirectional path tracing formulation based on the LVC approach that handles scenes with complex, layered materials efficiently on the GPU. We achieve coherent material evaluation while conserving GPU memory requirements using sorting. We propose a modified method for selecting light vertices using the contribution importance which improves the image quality for a given amount of work. Progressive rendering can empower artists in the production pipeline to iterate and preview their work quickly. We hope the work presented here will enable the use of GPUs in the production pipeline with complex materials and complicated lighting scenarios.",
"title": ""
},
{
"docid": "400a56ea0b2c005ed16500f0d7818313",
"text": "Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate prices. However, it depends on the design and calculation of a complex economic-related index, which is challenging to estimate accurately. Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this paper, we employ a recurrent neural network to predict real estate prices using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error and mean absolute percentage error.",
"title": ""
},
{
"docid": "b8c59cb962a970daaf012b15bcb8413d",
"text": "Joint image filters leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or enhancing spatial resolution. Existing methods either rely on various explicit filter constructions or hand-designed objective functions, thereby making it difficult to understand, improve, and accelerate these filters in a coherent framework. In this paper, we propose a learning-based approach for constructing joint filters based on Convolutional Neural Networks. In contrast to existing methods that consider only the guidance image, the proposed algorithm can selectively transfer salient structures that are consistent with both guidance and target images. We show that the model trained on a certain type of data, e.g., RGB and depth images, generalizes well to other modalities, e.g., flash/non-Flash and RGB/NIR images. We validate the effectiveness of the proposed joint filter through extensive experimental evaluations with state-of-the-art methods.",
"title": ""
},
{
"docid": "6db749b222a44764cf07bde527c230a3",
"text": "There have been many claims that the Internet represents a new “frictionless market.” Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products — books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping and shopping costs are included in the price. Additionally, we find that Internet retailers’ price adjustments over time are up to 100 times smaller than conventional retailers’ price adjustments — presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we simply compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.",
"title": ""
},
{
"docid": "83ed2dfe4456bc3cc8052747e7df7bfc",
"text": "Dietary restriction has been shown to have several health benefits including increased insulin sensitivity, stress resistance, reduced morbidity, and increased life span. The mechanism remains unknown, but the need for a long-term reduction in caloric intake to achieve these benefits has been assumed. We report that when C57BL6 mice are maintained on an intermittent fasting (alternate-day fasting) dietary-restriction regimen their overall food intake is not decreased and their body weight is maintained. Nevertheless, intermittent fasting resulted in beneficial effects that met or exceeded those of caloric restriction including reduced serum glucose and insulin levels and increased resistance of neurons in the brain to excitotoxic stress. Intermittent fasting therefore has beneficial effects on glucose regulation and neuronal resistance to injury in these mice that are independent of caloric intake.",
"title": ""
},
{
"docid": "90c6cf2fd66683843a8dd549676727d5",
"text": "Despite great progress in neuroscience, there are still fundamental unanswered questions about the brain, including the origin of subjective experience and consciousness. Some answers might rely on new physical mechanisms. Given that biophotons have been discovered in the brain, it is interesting to explore if neurons use photonic communication in addition to the well-studied electro-chemical signals. Such photonic communication in the brain would require waveguides. Here we review recent work (S. Kumar, K. Boone, J. Tuszynski, P. Barclay, and C. Simon, Scientific Reports 6, 36508 (2016)) suggesting that myelinated axons could serve as photonic waveguides. The light transmission in the myelinated axon was modeled, taking into account its realistic imperfections, and experiments were proposed both in vivo and in vitro to test this hypothesis. Potential implications for quantum biology are discussed.",
"title": ""
},
{
"docid": "21f079e590e020df08d461ba78a26d65",
"text": "The aim of this study was to develop a tool to measure the knowledge of nurses on pressure ulcer prevention. PUKAT 2·0 is a revised and updated version of the Pressure Ulcer Knowledge Assessment Tool (PUKAT) developed in 2010 at Ghent University, Belgium. The updated version was developed using state-of-the-art techniques to establish evidence concerning validity and reliability. Face and content validity were determined through a Delphi procedure including both experts from the European Pressure Ulcer Advisory Panel (EPUAP) and the National Pressure Ulcer Advisory Panel (NPUAP) (n = 15). A subsequent psychometric evaluation of 342 nurses and nursing students evaluated the item difficulty, discriminating power and quality of the response alternatives. Furthermore, construct validity was established through a test-retest procedure and the known-groups technique. The content validity was good and the difficulty level moderate. The discernment was found to be excellent: all groups with a (theoretically expected) higher level of expertise had a significantly higher score than the groups with a (theoretically expected) lower level of expertise. The stability of the tool is sufficient (Intraclass Correlation Coefficient = 0·69). The PUKAT 2·0 demonstrated good psychometric properties and can be used and disseminated internationally to assess knowledge about pressure ulcer prevention.",
"title": ""
},
{
"docid": "1e852e116c11a6c7fb1067313b1ffaa3",
"text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013",
"title": ""
}
] | scidocsrr |
9edda51d7574f2e83972bb4d6b033a3f | A review of methods for automatic understanding of natural language mathematical problems | [
{
"docid": "de43054eb774df93034ffc1976a932b7",
"text": "Recent experiments in programming natural language question-answering systems are reviewed to summarize the methods that have been developed for syntactic, semantic, and logical analysis of English strings. It is concluded that at least minimally effective techniques have been devised for answering questions from natural language subsets in small scale experimental systems and that a useful paradigm has evolved to guide research efforts in the field. Current approaches to semantic analysis and logical inference are seen to be effective beginnings but of questionable generality with respect either to subtle aspects of meaning or to applications over large subsets of English. Generalizing from current small-scale experiments to language-processing systems based on dictionaries with thousands of entries—with correspondingly large grammars and semantic systems—may entail a new order of complexity and require the invention and development of entirely different approaches to semantic analysis and question answering.",
"title": ""
}
] | [
{
"docid": "d9e09589352431cafb6e579faf91afa8",
"text": "The purpose of this study was to investigate the effects of training muscle groups 1 day per week using a split-body routine (SPLIT) vs. 3 days per week using a total-body routine (TOTAL) on muscular adaptations in well-trained men. Subjects were 20 male volunteers (height = 1.76 ± 0.05 m; body mass = 78.0 ± 10.7 kg; age = 23.5 ± 2.9 years) recruited from a university population. Participants were pair matched according to baseline strength and then randomly assigned to 1 of the 2 experimental groups: a SPLIT, where multiple exercises were performed for a specific muscle group in a session with 2-3 muscle groups trained per session (n = 10) or a TOTAL, where 1 exercise was performed per muscle group in a session with all muscle groups trained in each session (n = 10). Subjects were tested pre- and poststudy for 1 repetition maximum strength in the bench press and squat, and muscle thickness (MT) of forearm flexors, forearm extensors, and vastus lateralis. Results showed significantly greater increases in forearm flexor MT for TOTAL compared with SPLIT. No significant differences were noted in maximal strength measures. The findings suggest a potentially superior hypertrophic benefit to higher weekly resistance training frequencies.",
"title": ""
},
{
"docid": "166b16222ecc15048972e535dbf4cb38",
"text": "Fingerprint matching systems generally use four types of representation schemes: grayscale image, phase image, skeleton image, and minutiae, among which minutiae-based representation is the most widely adopted one. The compactness of minutiae representation has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original grayscale fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. These techniques try to either reconstruct the skeleton image, which is then converted into the grayscale image, or reconstruct the grayscale image directly from the minutiae template. However, they have a common drawback: Many spurious minutiae not included in the original minutiae template are generated in the reconstructed image. Moreover, some of these reconstruction techniques can only generate a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image. The proposed reconstruction algorithm not only gives the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. Specifically, a fingerprint image is represented as a phase image which consists of the continuous phase and the spiral phase (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against different impressions of the original fingerprint) using a commercial fingerprint recognition system. Given the reconstructed image from our algorithm, we show that both types of attacks can be successfully launched against a fingerprint recognition system.",
"title": ""
},
{
"docid": "0ff76204fcdf1a7cf2a6d13a5d3b1597",
"text": "In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.",
"title": ""
},
{
"docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7",
"text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.",
"title": ""
},
{
"docid": "82a3fe6dfa81e425eb3aa3404799e72d",
"text": "ABSTRACT: Nonlinear control problem for a missile autopilot is quick adaptation and minimizing the desired acceleration to missile nonlinear model. For this several missile controllers are provided which are on the basis of nonlinear control or design of linear control for the linear missile system. In this paper a linear control of dynamic matrix type is proposed for the linear model of missile. In the first section, an approximate two degrees of freedom missile model, known as Horton model, is introduced. Then, the nonlinear model is converted into observable and controllable model base on the feedback linear rule of input-state mode type. Finally for design of control model, the dynamic matrix flight control, which is one of the linear predictive control design methods on the basis of system step response information, is used. This controller is a recursive method which calculates the development of system input by definition and optimization of a cost function and using system dynamic matrix. So based on the applied inputs and previous output information, the missile acceleration would be calculated. Unlike other controllers, this controller doesn’t require an interaction effect and accurate model. Although, it has predicting and controlling horizon, there isn’t such horizons in non-predictive methods.",
"title": ""
},
{
"docid": "a984a54369a1db6a0165a96695c94de5",
"text": "IT projects have certain features that make them different from other engineering projects. These include increased complexity and higher chances of project failure. To increase the chances of an IT project to be perceived as successful by all the parties involved in the project from its conception, development and implementation, it is necessary to identify at the outset of the project what the important factors influencing that success are. Current methodologies and tools used for identifying, classifying and evaluating the indicators of success in IT projects have several limitations that can be overcome by employing the new methodology presented in this paper. This methodology is based on using Fuzzy Cognitive Maps (FCM) for mapping success, modelling Critical Success Factors (CSFs) perceptions and the relations between them. This is an area where FCM has never been applied before. The applicability of the FCM methodology is demonstrated through a case study based on a new project idea, the Mobile Payment System (MPS) Project, related to the fast evolving world of mobile telecommunications.",
"title": ""
},
{
"docid": "bbe59dd74c554d92167f42701a1f8c3d",
"text": "Finding subgraph isomorphisms is an important problem in many applications which deal with data modeled as graphs. While this problem is NP-hard, in recent years, many algorithms have been proposed to solve it in a reasonable time for real datasets using different join orders, pruning rules, and auxiliary neighborhood information. However, since they have not been empirically compared one another in most research work, it is not clear whether the later work outperforms the earlier work. Another problem is that reported comparisons were often done using the original authors’ binaries which were written in different programming environments. In this paper, we address these serious problems by re-implementing five state-of-the-art subgraph isomorphism algorithms in a common code base and by comparing them using many real-world datasets and their query loads. Through our in-depth analysis of experimental results, we report surprising empirical findings.",
"title": ""
},
{
"docid": "627d938cf2194cd0cab09f36a0bd50a9",
"text": "This chapter focuses on the why, what, and how of bodily expression analysis for automatic affect recognition. It first asks the question of ‘why bodily expression?’ and attempts to find answers by reviewing the latest bodily expression perception literature. The chapter then turns its attention to the question of ‘what are the bodily expressions recognized automatically?’ by providing an overview of the automatic bodily expression recognition literature. The chapter then provides representative answers to how bodily expression analysis can aid affect recognition by describing three case studies: (1) data acquisition and annotation of the first publicly available database of affective face-and-body displays (i.e., the FABO database); (2) a representative approach for affective state recognition from face-and-body display by detecting the space-time interest points in video and using Canonical Correlation Analysis (CCA) for fusion, and (3) a representative approach for explicit detection of the temporal phases (segments) of affective states (start/end of the expression and its subdivision into phases such as neutral, onset, apex, and offset) from bodily expressions. The chapter concludes by summarizing the main challenges faced and discussing how we can advance the state of the art in the field.",
"title": ""
},
{
"docid": "e9aea5919d3d38184fc13c10f1751293",
"text": "The distinct protein aggregates that are found in Alzheimer's, Parkinson's, Huntington's and prion diseases seem to cause these disorders. Small intermediates — soluble oligomers — in the aggregation process can confer synaptic dysfunction, whereas large, insoluble deposits might function as reservoirs of the bioactive oligomers. These emerging concepts are exemplified by Alzheimer's disease, in which amyloid β-protein oligomers adversely affect synaptic structure and plasticity. Findings in other neurodegenerative diseases indicate that a broadly similar process of neuronal dysfunction is induced by diffusible oligomers of misfolded proteins.",
"title": ""
},
{
"docid": "a271371ba28be10b67e31ecca6f3aa88",
"text": "The toxicity and repellency of the bioactive chemicals of clove (Syzygium aromaticum) powder, eugenol, eugenol acetate, and beta-caryophyllene were evaluated against workers of the red imported fire ant, Solenopsis invicta Buren. Clove powder applied at 3 and 12 mg/cm2 provided 100% ant mortality within 6 h, and repelled 99% within 3 h. Eugenol was the fastest acting compound against red imported fire ant compared with eugenol acetate, beta-caryophyllene, and clove oil. The LT50 values inclined exponentially with the increase in the application rate of the chemical compounds tested. However, repellency did not increase with the increase in the application rate of the chemical compounds tested, but did with the increase in exposure time. Eugenol, eugenol acetate, as well as beta-caryophyllene and clove oil may provide another tool for red imported fire ant integrated pest management, particularly in situations where conventional insecticides are inappropriate.",
"title": ""
},
{
"docid": "88f60c6835fed23e12c56fba618ff931",
"text": "Design of fault tolerant systems is a popular subject in flight control system design. In particular, adaptive control approach has been successful in recovering aircraft in a wide variety of different actuator/sensor failure scenarios. However, if the aircraft goes under a severe actuator failure, control system might not be able to adapt fast enough to changes in the dynamics, which would result in performance degradation or even loss of the aircraft. Inspired by the recent success of deep learning applications, this work builds a hybrid recurren-t/convolutional neural network model to estimate adaptation parameters for aircraft dynamics under actuator/engine faults. The model is trained offline from a database of different failure scenarios. In case of an actuator/engine failure, the model identifies adaptation parameters and feeds this information to the adaptive control system, which results in significantly faster convergence of the controller coefficients. Developed control system is implemented on a nonlinear 6-DOF F-16 aircraft, and the results show that the proposed architecture is especially beneficial in severe failure scenarios.",
"title": ""
},
{
"docid": "c0a05cad5021b1e779682b50a53f25fd",
"text": "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area. ∗Supported by NSF, MURI, and the Packard foundation. †Supported by NSF CNS-0716199, CNS-0915361, and CNS-0952692, Air Force Office of Scientific Research (AFO SR) under the MURI award for “Collaborative policies and assured information sharing” (Project PRESIDIO), Department of Homeland Security Grant 2006-CS-001-000001-02 (subaward 641), and the Alfred P. Sloan Foundation.",
"title": ""
},
{
"docid": "240c47d27533069f339d8eb090a637a9",
"text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. The experimental results verify the correctness and feasibility of the proposed strategy.",
"title": ""
},
{
"docid": "3ae5e7ac5433f2449cd893e49f1b2553",
"text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.",
"title": ""
},
{
"docid": "1dfa61f341919dcb4169c167a92c2f43",
"text": "This paper presents an algorithm for the detection of micro-crack defects in the multicrystalline solar cells. This detection goal is very challenging due to the presence of various types of image anomalies like dislocation clusters, grain boundaries, and other artifacts due to the spurious discontinuities in the gray levels. In this work, an algorithm featuring an improved anisotropic diffusion filter and advanced image segmentation technique is proposed. The methods and procedures are assessed using 600 electroluminescence images, comprising 313 intact and 287 defected samples. Results indicate that the methods and procedures can accurately detect micro-crack in solar cells with sensitivity, specificity, and accuracy averaging at 97%, 80%, and 88%, respectively.",
"title": ""
},
{
"docid": "0c842ef34f1924e899e408309f306640",
"text": "A single-tube 5' nuclease multiplex PCR assay was developed on the ABI 7700 Sequence Detection System (TaqMan) for the detection of Neisseria meningitidis, Haemophilus influenzae, and Streptococcus pneumoniae from clinical samples of cerebrospinal fluid (CSF), plasma, serum, and whole blood. Capsular transport (ctrA), capsulation (bexA), and pneumolysin (ply) gene targets specific for N. meningitidis, H. influenzae, and S. pneumoniae, respectively, were selected. Using sequence-specific fluorescent-dye-labeled probes and continuous real-time monitoring, accumulation of amplified product was measured. Sensitivity was assessed using clinical samples (CSF, serum, plasma, and whole blood) from culture-confirmed cases for the three organisms. The respective sensitivities (as percentages) for N. meningitidis, H. influenzae, and S. pneumoniae were 88.4, 100, and 91.8. The primer sets were 100% specific for the selected culture isolates. The ctrA primers amplified meningococcal serogroups A, B, C, 29E, W135, X, Y, and Z; the ply primers amplified pneumococcal serotypes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10A, 11A, 12, 14, 15B, 17F, 18C, 19, 20, 22, 23, 24, 31, and 33; and the bexA primers amplified H. influenzae types b and c. Coamplification of two target genes without a loss of sensitivity was demonstrated. The multiplex assay was then used to test a large number (n = 4,113) of culture-negative samples for the three pathogens. Cases of meningococcal, H. influenzae, and pneumococcal disease that had not previously been confirmed by culture were identified with this assay. The ctrA primer set used in the multiplex PCR was found to be more sensitive (P < 0.0001) than the ctrA primers that had been used for meningococcal PCR testing at that time.",
"title": ""
},
{
"docid": "c699ce2a06276f722bf91806378b11eb",
"text": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.",
"title": ""
},
{
"docid": "0ac7db546c11b9d18897ceeb2e5be70f",
"text": "A backstepping approach is proposed in this paper to cope with the failure of a quadrotor propeller. The presented methodology supposes to turn off also the motor which is opposite to the broken one. In this way, a birotor configuration with fixed propellers is achieved. The birotor is controlled to follow a planned emergency landing trajectory. Theory shows that the birotor can reach any point in the Cartesian space losing the possibility to control the yaw angle. Simulation tests are employed to validate the proposed controller design.",
"title": ""
},
{
"docid": "0521fe73626d12a3962934cf2b2ee2e9",
"text": "General as well as the MSW management in Thailand is reviewed in this paper. Topics include the MSW generation, sources, composition, and trends. The review, then, moves to sustainable solutions for MSW management, sustainable alternative approaches with an emphasis on an integrated MSW management. Information of waste in Thailand is also given at the beginning of this paper for better understanding of later contents. It is clear that no one single method of MSW disposal can deal with all materials in an environmentally sustainable way. As such, a suitable approach in MSW management should be an integrated approach that could deliver both environmental and economic sustainability. With increasing environmental concerns, the integrated MSW management system has a potential to maximize the useable waste materials as well as produce energy as a by-product. In Thailand, the compositions of waste (86%) are mainly organic waste, paper, plastic, glass, and metal. As a result, the waste in Thailand is suitable for an integrated MSW management. Currently, the Thai national waste management policy starts to encourage the local administrations to gather into clusters to establish central MSW disposal facilities with suitable technologies and reducing the disposal cost based on the amount of MSW generated. Keywords— MSW, management, sustainable, Thailand",
"title": ""
},
{
"docid": "70b900d196f689caf9c3051cc27792ae",
"text": "This paper describes the hardware and software design of the kidsize humanoid robot systems of the Darmstadt Dribblers in 2007. The robots are used as a vehicle for research in control of locomotion and behavior of autonomous humanoid robots and robot teams with many degrees of freedom and many actuated joints. The Humanoid League of RoboCup provides an ideal testbed for such aspects of dynamics in motion and autonomous behavior as the problem of generating and maintaining statically or dynamically stable bipedal locomotion is predominant for all types of vision guided motions during a soccer game. A modular software architecture as well as further technologies have been developed for efficient and effective implementation and test of modules for sensing, planning, behavior, and actions of humanoid robots.",
"title": ""
}
] | scidocsrr |
886e646e5ea0c0497984ecd7cb60ff9b | Sequence Discriminative Training for Offline Handwriting Recognition by an Interpolated CTC and Lattice-Free MMI Objective Function | [
{
"docid": "6dfc558d273ec99ffa7dc638912d272c",
"text": "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.",
"title": ""
},
{
"docid": "7f7a67af972d26746ce1ae0c7ec09499",
"text": "We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination provide a 20% boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2%, representing an improvement over previously reported results on this benchmark task.",
"title": ""
}
] | [
{
"docid": "bedc7de2ede206905e89daf61828f868",
"text": "Spectral graph partitioning provides a powerful approach to image segmentation. We introduce an alternate idea that finds partitions with a small isoperimetric constant, requiring solution to a linear system rather than an eigenvector problem. This approach produces the high quality segmentations of spectral methods, but with improved speed and stability.",
"title": ""
},
{
"docid": "2923d1776422a1f44395f169f0d61995",
"text": "Rolling upgrade consists of upgrading progressively the servers of a distributed system to reduce service downtime.Upgrading a subset of servers requires a well-engineered cluster membership protocol to maintain, in the meantime, the availability of the system state. Existing cluster membership reconfigurations, like CoreOS etcd, rely on a primary not only for reconfiguration but also for storing information. At any moment, there can be at most one primary, whose replacement induces disruption. We propose Rollup, a non-disruptive rolling upgrade protocol with a fast consensus-based reconfiguration. Rollup relies on a candidate leader only for the reconfiguration and scalable biquorums for service requests. While Rollup implements a non-disruptive cluster membership protocol, it does not offer a full-fledged coordination service. We analyzed Rollup theoretically and experimentally on an isolated network of 26 physical machines and an Amazon EC2 cluster of 59 virtual machines. Our results show an 8-fold speedup compared to a rolling upgrade based on a primary for reconfiguration.",
"title": ""
},
{
"docid": "4d6ca3875418dedcd0b71bc13b1a529d",
"text": "Leadership is one of the most discussed and important topics in the social sciences especially in organizational theory and management. Generally leadership is the process of influencing group activities towards the achievement of goals. A lot of researches have been conducted in this area .some researchers investigated individual characteristics such as demographics, skills and abilities, and personality traits, predict leadership effectiveness. Different theories, leadership styles and models have been propounded to provide explanations on the leadership phenomenon and to help leaders influence their followers towards achieving organizational goals. Today with the change in organizations and business environment the leadership styles and theories have been changed. In this paper, we review the new leadership theories and styles that are new emerging and are according to the need of the organizations. Leadership styles and theories have been investigated to get the deep understanding of the new trends and theories of the leadership to help the managers and organizations choose appropriate style of leadership. key words: new emerging styles, new theories, leadership, organization",
"title": ""
},
{
"docid": "e8eba986ab77d519ce8808b3d33b2032",
"text": "In this paper, an implementation of an extended target tracking filter using measurements from high-resolution automotive Radio Detection and Ranging (RADAR) is proposed. Our algorithm uses the Cartesian point measurements from the target's contour as well as the Doppler range rate provided by the RADAR to track a target vehicle's position, orientation, and translational and rotational velocities. We also apply a Gaussian Process (GP) to model the vehicle's shape. To cope with the nonlinear measurement equation, we implement an Extended Kalman Filter (EKF) and provide the necessary derivatives for the Doppler measurement. We then evaluate the effectiveness of incorporating the Doppler rate on simulations and on 2 sets of real data.",
"title": ""
},
{
"docid": "fe0c8969c666b6074d2bc5cc49546b78",
"text": "We propose a novel adversarial multi-task learning scheme, aiming at actively curtailing the inter-talker feature variability while maximizing its senone discriminability so as to enhance the performance of a deep neural network (DNN) based ASR system. We call the scheme speaker-invariant training (SIT). In SIT, a DNN acoustic model and a speaker classifier network are jointly optimized to minimize the senone (tied triphone state) classification loss, and simultaneously mini-maximize the speaker classification loss. A speaker-invariant and senone-discriminative deep feature is learned through this adversarial multi-task learning. With SIT, a canonical DNN acoustic model with significantly reduced variance in its output probabilities is learned with no explicit speaker-independent (SI) transformations or speaker-specific representations used in training or testing. Evaluated on the CHiME-3 dataset, the SIT achieves 4.99% relative word error rate (WER) improvement over the conventional SI acoustic model. With additional unsupervised speaker adaptation, the speaker-adapted (SA) SIT model achieves 4.86% relative WER gain over the SA SI acoustic model.",
"title": ""
},
{
"docid": "61c6d49c3cdafe4366d231ebad676077",
"text": "Video affective content analysis has been an active research area in recent decades, since emotion is an important component in the classification and retrieval of videos. Video affective content analysis can be divided into two approaches: direct and implicit. Direct approaches infer the affective content of videos directly from related audiovisual features. Implicit approaches, on the other hand, detect affective content from videos based on an automatic analysis of a user's spontaneous response while consuming the videos. This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research.",
"title": ""
},
{
"docid": "97ca52a74f6984cda706b54830c58fd8",
"text": "In this paper, we study a novel approach for named entity recognition (NER) and mention detection in natural language processing. Instead of treating NER as a sequence labelling problem, we propose a new local detection approach, which rely on the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. Afterwards, a simple feedforward neural network is used to reject or predict entity label for each individual fragment. The proposed method has been evaluated in several popular NER and mention detection tasks, including the CoNLL 2003 NER task and TAC-KBP2015 and TAC-KBP2016 Trilingual Entity Discovery and Linking (EDL) tasks. Our methods have yielded pretty strong performance in all of these examined tasks. This local detection approach has shown many advantages over the traditional sequence labelling methods.",
"title": ""
},
{
"docid": "8eeb8fba948b37b4e9489c472cb1506a",
"text": "Total Quality Management (TQM) has become, according to one source, 'as pervasive a part of business thinking as quarterly financial results,' and yet TQM's role as a strategic resource remains virtually unexamined in strategic management research. Drawing on the resource approach and other theoretical perspectives, this article examines TQM as a potential source of sustainable competitive advantage, reviews existing empirical evidence, and reports findings from a new empirical study of TQM's performance consequences. The findings suggest that most features generally associated with TQM—such as quality training, process improvement, and benchmarking—do not generally produce advantage, but that certain tacit, behavioral, imperfectly imitable features—such as open culture, employee empowerment, and executive commitment—can produce advantage. The author concludes that these tacit resources, and not TQM tools and techniques, drive TQM success, and that organizations that acquire them can outperform competitors with or without the accompanying TQM ideology.",
"title": ""
},
{
"docid": "dbc57902c0655f1bdb3f7dbdcdb6fd5c",
"text": "In this paper, a progressive learning technique for multi-class classification is proposed. This newly developed learning technique is independent of the number of class constraints and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the parameters are calculated in such a way that it retains the knowledge learnt thus far. This technique is suitable for realworld applications where the number of classes is often unknown and online learning from real-time data is required. The consistency and the complexity of the progressive learning technique are analyzed. Several standard datasets are used to evaluate the performance of the developed technique. A comparative study shows that the developed technique is superior. Key Words—Classification, machine learning, multi-class, sequential learning, progressive learning.",
"title": ""
},
{
"docid": "1f4e15c44b4700598701667fa5baaaef",
"text": "We present the new HippoCampus micro underwater vehicle, first introduced in [1]. It is designed for monitoring confined fluid volumes. These tightly constrained settings demand agile vehicle dynamics. Moreover, we adapt a robust attitude control scheme for aerial drones to the underwater domain. We demonstrate the performance of the controller with a challenging maneuver. A submerged Furuta pendulum is stabilized by HippoCampus after a swing-up. The experimental results reveal the robustness of the control method, as the system quickly recovers from strong physical disturbances, which are applied to the system.",
"title": ""
},
{
"docid": "fb79df27fa2a5b1af8d292af8d53af6e",
"text": "This paper presents a proportional integral derivative (PID) controller with a derivative filter coefficient to control a twin rotor multiple input multiple output system (TRMS), which is a nonlinear system with two degrees of freedom and cross couplings. The mathematical modeling of TRMS is done using MATLAB/Simulink. The simulation results are compared with the results of conventional PID controller. The results of proposed PID controller with derivative filter shows better transient and steady state response as compared to conventional PID controller.",
"title": ""
},
{
"docid": "c6bdd8d88dd2f878ddc6f2e8be39aa78",
"text": "A wide variety of non-photorealistic rendering techniques make use of random variation in the placement or appearance of primitives. In order to avoid the \"shower-door\" effect, this random variation should move with the objects in the scene. Here we present coherent noise tailored to this purpose. We compute the coherent noise with a specialized filter that uses the depth and velocity fields of a source sequence. The computation is fast and suitable for interactive applications like games.",
"title": ""
},
{
"docid": "2737e9e01f00db8fa568ae1fe5881a5e",
"text": "Resonant converters which use a small DC bus capacitor to achieve high power factor are desirable for low cost Inductive Power Transfer (IPT) applications but produce amplitude modulated waveforms which are then present on any coupled load. The modulated coupled voltage produces pulse currents which could be used for battery charging purposes. In order to understand the effects of such pulse charging, two Lithium Iron Phosphate (LiFePO4) batteries underwent 2000 cycles of charge and discharging cycling utilizing both pulse and DC charging profiles. The cycling results show that such pulse charging is comparable to conventional DC charging and may be suitable for low cost battery charging applications without impacting battery life.",
"title": ""
},
{
"docid": "e37805ea3c4e25ab49dc4f7992d8e7c6",
"text": "Curriculum learning (CL) or self-paced learning (SPL) represents a recently proposed learning regime inspired by the learning process of humans and animals that gradually proceeds from easy to more complex samples in training. The two methods share a similar conceptual learning paradigm, but differ in specific learning schemes. In CL, the curriculum is predetermined by prior knowledge, and remain fixed thereafter. Therefore, this type of method heavily relies on the quality of prior knowledge while ignoring feedback about the learner. In SPL, the curriculum is dynamically determined to adjust to the learning pace of the leaner. However, SPL is unable to deal with prior knowledge, rendering it prone to overfitting. In this paper, we discover the missing link between CL and SPL, and propose a unified framework named self-paced curriculum leaning (SPCL). SPCL is formulated as a concise optimization problem that takes into account both prior knowledge known before training and the learning progress during training. In comparison to human education, SPCL is analogous to “instructor-student-collaborative” learning mode, as opposed to “instructor-driven” in CL or “student-driven” in SPL. Empirically, we show that the advantage of SPCL on two tasks. Curriculum learning (Bengio et al. 2009) and self-paced learning (Kumar, Packer, and Koller 2010) have been attracting increasing attention in the field of machine learning and artificial intelligence. Both the learning paradigms are inspired by the learning principle underlying the cognitive process of humans and animals, which generally start with learning easier aspects of a task, and then gradually take more complex examples into consideration. The intuition can be explained in analogous to human education in which a pupil is supposed to understand elementary algebra before he or she can learn more advanced algebra topics. This learning paradigm has been empirically demonstrated to be instrumental in avoiding bad local minima and in achieving a better generalization result (Khan, Zhu, and Mutlu 2011; Basu and Christensen 2013; Tang et al. 2012). A curriculum determines a sequence of training samples which essentially corresponds to a list of samples ranked in ascending order of learning difficulty. A major disparity Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. between curriculum learning (CL) and self-paced learning (SPL) lies in the derivation of the curriculum. In CL, the curriculum is assumed to be given by an oracle beforehand, and remains fixed thereafter. In SPL, the curriculum is dynamically generated by the learner itself, according to what the learner has already learned. The advantage of CL includes the flexibility to incorporate prior knowledge from various sources. Its drawback stems from the fact that the curriculum design is determined independently of the subsequent learning, which may result in inconsistency between the fixed curriculum and the dynamically learned models. From the optimization perspective, since the learning proceeds iteratively, there is no guarantee that the predetermined curriculum can even lead to a converged solution. SPL, on the other hand, formulates the learning problem as a concise biconvex problem, where the curriculum design is embedded and jointly learned with model parameters. Therefore, the learned model is consistent. However, SPL is limited in incorporating prior knowledge into learning, rendering it prone to overfitting. 
Ignoring prior knowledge is less reasonable when reliable prior information is available. Since both methods have their advantages, it is difficult to judge which one is better in practice. In this paper, we discover the missing link between CL and SPL. We formally propose a unified framework called Self-paced Curriculum Leaning (SPCL). SPCL represents a general learning paradigm that combines the merits from both the CL and SPL. On one hand, it inherits and further generalizes the theory of SPL. On the other hand, SPCL addresses the drawback of SPL by introducing a flexible way to incorporate prior knowledge. This paper also discusses concrete implementations within the proposed framework, which can be useful for solving various problems. This paper offers a compelling insight on the relationship between the existing CL and SPL methods. Their relation can be intuitively explained in the context of human education, in which SPCL represents an “instructor-student collaborative” learning paradigm, as opposed to “instructordriven” in CL or “student-driven” in SPL. In SPCL, instructors provide prior knowledge on a weak learning sequence of samples, while leaving students the freedom to decide the actual curriculum according to their learning pace. Since an optimal curriculum for the instructor may not necessarily be optimal for all students, we hypothesize that given reasonable prior knowledge, the curriculum devised by instructors and students together can be expected to be better than the curriculum designed by either part alone. Empirically, we substantiate this hypothesis by demonstrating that the proposed method outperforms both CL and SPL on two tasks. The rest of the paper is organized as follows. We first briefly introduce the background knowledge on CL and SPL. Then we propose the model and the algorithm of SPCL. After that, we discuss concrete implementations of SPCL. The experimental results and conclusions are presented in the last two sections. Background Knowledge",
"title": ""
},
{
"docid": "0dfcb525fe5dd00032e7826a76a290e7",
"text": "In this study, we tried to find a solution for inpainting problem using deep convolutional autoencoders. A new training approach has been proposed as an alternative to the Generative Adversarial Networks. The neural network that designed for inpainting takes an image, which the certain part of its center is extracted, as an input then it attempts to fill the blank region. During the training phase, a distinct deep convolutional neural network is used and it is called Advisor Network. We show that the features extracted from intermediate layers of the Advisor Network, which is trained on a different dataset for classification, improves the performance of the autoencoder.",
"title": ""
},
{
"docid": "eef278400e3526a90e144662aab9af12",
"text": "BACKGROUND\nMango is a highly perishable seasonal fruit and large quantities are wasted during the peak season as a result of poor postharvest handling procedures. Processing surplus mango fruits into flour to be used as a functional ingredient appears to be a good preservation method to ensure its extended consumption.\n\n\nRESULTS\nIn the present study, the chemical composition, bioactive/antioxidant compounds and functional properties of green and ripe mango (Mangifera indica var. Chokanan) peel and pulp flours were evaluated. Compared to commercial wheat flour, mango flours were significantly low in moisture and protein, but were high in crude fiber, fat and ash content. Mango flour showed a balance between soluble and insoluble dietary fiber proportions, with total dietary fiber content ranging from 3.2 to 5.94 g kg⁻¹. Mango flours exhibited high values for bioactive/antioxidant compounds compared to wheat flour. The water absorption capacity and oil absorption capacity of mango flours ranged from 0.36 to 0.87 g kg⁻¹ and from 0.18 to 0.22 g kg⁻¹, respectively.\n\n\nCONCLUSION\nResults of this study showed mango peel flour to be a rich source of dietary fiber with good antioxidant and functional properties, which could be a useful ingredient for new functional food formulations.",
"title": ""
},
{
"docid": "754115ea561f99d9d185e90b7a67acb3",
"text": "The danger of SQL injections has been known for more than a decade but injection attacks have led the OWASP top 10 for years and still are one of the major reasons for devastating attacks on web sites. As about 24% percent of the top 10 million web sites are built upon the content management system WordPress, it's no surprise that content management systems in general and WordPress in particular are frequently targeted. To understand how the underlying security bugs can be discovered and exploited by attackers, 199 publicly disclosed SQL injection exploits for WordPress and its plugins have been analyzed. The steps an attacker would take to uncover and utilize these bugs are followed in order to gain access to the underlying database through automated, dynamic vulnerability scanning with well-known, freely available tools. Previous studies have shown that the majority of the security bugs are caused by the same programming errors as 10 years ago and state that the complexity of finding and exploiting them has not increased significantly. Furthermore, they claim that although the complexity has not increased, automated tools still do not detect the majority of bugs. The results of this paper show that tools for automated, dynamic vulnerability scanning only play a subordinate role for developing exploits. The reason for this is that only a small percentage of attack vectors can be found during the detection phase. So even if the complexity of exploiting an attack vector has not increased, this attack vector has to be found in the first place, which is the major challenge for this kind of tools. Therefore, from today's perspective, a combination with manual and/or static analysis is essential when testing for security vulnerabilities.",
"title": ""
},
{
"docid": "7b6c039783091260cee03704ce9748d8",
"text": "We describe Algorithm 2 in detail. Algorithm 2 takes as input the sample set S, the query sequence F , the sensitivity of query ∆, the threshold τ , and the stop parameter s. Algorithm 2 outputs the result of each comparison with the threshold. In Algorithm 2, each noisy query output is compred with a noisy threshold at line 4 and outputs the result of comparison. Let ⊤ mean that fk(S) > τ . Algorithm 2 is terminated if outputs ⊤ s times.",
"title": ""
},
{
"docid": "941d7a7a59261fe2463f42cad9cff004",
"text": "Dragon's blood is one of the renowned traditional medicines used in different cultures of world. It has got several therapeutic uses: haemostatic, antidiarrhetic, antiulcer, antimicrobial, antiviral, wound healing, antitumor, anti-inflammatory, antioxidant, etc. Besides these medicinal applications, it is used as a coloring material, varnish and also has got applications in folk magic. These red saps and resins are derived from a number of disparate taxa. Despite its wide uses, little research has been done to know about its true source, quality control and clinical applications. In this review, we have tried to overview different sources of Dragon's blood, its source wise chemical constituents and therapeutic uses. As well as, a little attempt has been done to review the techniques used for its quality control and safety.",
"title": ""
},
{
"docid": "72c79181572c836cb92aac8fe7a14c5d",
"text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of applications. Our experiments with the METER corpus show that the best results are obtained when considering low level word n-grams comparisons (n = {2, 3}).",
"title": ""
}
] | scidocsrr |
d47312497b8018730d33a0545a46c4fa | Animated narrative visualization for video clickstream data | [
{
"docid": "5f04fcacc0dd325a1cd3ba5a846fe03f",
"text": "Web clickstream data are routinely collected to study how users browse the web or use a service. It is clear that the ability to recognize and summarize user behavior patterns from such data is valuable to e-commerce companies. In this paper, we introduce a visual analytics system to explore the various user behavior patterns reflected by distinct clickstream clusters. In a practical analysis scenario, the system first presents an overview of clickstream clusters using a Self-Organizing Map with Markov chain models. Then the analyst can interactively explore the clusters through an intuitive user interface. He can either obtain summarization of a selected group of data or further refine the clustering result. We evaluated our system using two different datasets from eBay. Analysts who were working on the same data have confirmed the system's effectiveness in extracting user behavior patterns from complex datasets and enhancing their ability to reason.",
"title": ""
}
] | [
{
"docid": "f29d0ea5ff5c96dadc440f4d4aa229c6",
"text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.",
"title": ""
},
{
"docid": "97a1d44956f339a678da4c7a32b63bf6",
"text": "As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",
"title": ""
},
{
"docid": "98ff207ca344eb058c6bf7ba87751822",
"text": "Ultra-wideband radar is an excellent tool for nondestructive examination of walls and highway structures. Therefore often steep edged narrow pulses with rise-, fall-times in the range of 100 ps are used. For digitizing of the reflected pulses a down conversion has to be accomplished. A new low cost sampling down converter with a sampling phase detector for use in ultra-wideband radar applications is presented.",
"title": ""
},
{
"docid": "d14812771115b4736c6d46aecadb2d8a",
"text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.",
"title": ""
},
{
"docid": "0f927fc7b8005ee6bb6ec22d8070a062",
"text": "We propose a Dynamic-Spatial-Attention (DSA) Recurrent Neural Network (RNN) for anticipating accidents in dashcam videos (Fig. 1). Our DSA-RNN learns to (1) distribute soft-attention to candidate objects dynamically to gather subtle cues and (2) model the temporal dependencies of all cues to robustly anticipate an accident. Anticipating accidents is much less addressed than anticipating events such as changing a lane, making a turn, etc., since accidents are rare to be observed and can happen in many different ways mostly in a sudden. To overcome these challenges, we (1) utilize state-of-the-art object detector [3] to detect candidate objects, and (2) incorporate full-frame and object-based appearance and motion features in our model. We also harvest a diverse dataset of 678 dashcam accident videos on the web (Fig. 3). The dataset is unique, since various accidents (e.g., a motorbike hits a car, a car hits another car, etc.) occur in all videos. We manually mark the time-location of accidents and use them as supervision to train and evaluate our method. We show that our method anticipates accidents about 2 seconds before they occur with 80% recall and 56.14% precision. Most importantly, it achieves the highest mean average precision (74.35%) outperforming other baselines without attention or RNN. 2 Fu-Hsiang Chan, Yu-Ting Chen, Yu Xiang, Min Sun",
"title": ""
},
{
"docid": "a4267e0cd6300dc128bfe9de62322ac7",
"text": "According to the most common definition, idioms are linguistic expressions whose overall meaning cannot be predicted from the meanings of the constituent parts Although we agree with the traditional view that there is no complete predictability, we suggest that there is a great deal of systematic conceptual motivation for the meaning of most idioms Since most idioms are based on conceptual metaphors and metonymies, systematic motivation arises from sets of 'conceptual mappings or correspondences' that obtain between a source and a target domain in the sense of Lakoff and Koiecses (1987) We distinguish among three aspects of idiomatic meaning First, the general meaning of idioms appears to be determined by the particular 'source domains' that apply to a particular target domain Second, more specific aspects ot idiomatic meaning are provided by the 'ontological mapping that applies to a given idiomatic expression Third, connotative aspects ot idiomatic meaning can be accounted for by 'epistemic correspondences' Finally, we also present an informal experimental study the results of which show that the cognitive semantic view can facilitate the learning of idioms for non-native speakers",
"title": ""
},
{
"docid": "c38a6685895c23620afb6570be4c646b",
"text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. And as the emerging field of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) which contains complex computational logic. To achieve high accuracy, researchers always build large-scale LSTM networks which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than ARM Cortex-A9 processor.",
"title": ""
},
{
"docid": "a3f2e552e5bbf2b4bab55963ee84915d",
"text": "-risks, and balance formalization and portfolio management, improvement.",
"title": ""
},
{
"docid": "e1096df0a86d37c11ed4a31d9e67ac6e",
"text": "............................................................................................................................................... 4",
"title": ""
},
{
"docid": "f5d412649f974245fb7142ea66e3e794",
"text": "Inflammation clearly occurs in pathologically vulnerable regions of the Alzheimer's disease (AD) brain, and it does so with the full complexity of local peripheral inflammatory responses. In the periphery, degenerating tissue and the deposition of highly insoluble abnormal materials are classical stimulants of inflammation. Likewise, in the AD brain damaged neurons and neurites and highly insoluble amyloid beta peptide deposits and neurofibrillary tangles provide obvious stimuli for inflammation. Because these stimuli are discrete, microlocalized, and present from early preclinical to terminal stages of AD, local upregulation of complement, cytokines, acute phase reactants, and other inflammatory mediators is also discrete, microlocalized, and chronic. Cumulated over many years, direct and bystander damage from AD inflammatory mechanisms is likely to significantly exacerbate the very pathogenic processes that gave rise to it. Thus, animal models and clinical studies, although still in their infancy, strongly suggest that AD inflammation significantly contributes to AD pathogenesis. By better understanding AD inflammatory and immunoregulatory processes, it should be possible to develop anti-inflammatory approaches that may not cure AD but will likely help slow the progression or delay the onset of this devastating disorder.",
"title": ""
},
{
"docid": "73a998535ab03730595ce5d9c1f071f7",
"text": "This article familiarizes counseling psychologists with qualitative research methods in psychology developed in the tradition of European phenomenology. A brief history includes some of Edmund Husserl’s basic methods and concepts, the adoption of existential-phenomenology among psychologists, and the development and formalization of qualitative research procedures in North America. The choice points and alternatives in phenomenological research in psychology are delineated. The approach is illustrated by a study of a recovery program for persons repeatedly hospitalized for chronic mental illness. Phenomenological research is compared with other qualitative methods, and some of its benefits for counseling psychology are identified.",
"title": ""
},
{
"docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522",
"text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. In this paper we propose a novel approach named as Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connections to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.",
"title": ""
},
{
"docid": "78539b627037a491dade4a1e8abdaa0b",
"text": "Scholarly citations from one publication to another, expressed as reference lists within academic articles, are core elements of scholarly communication. Unfortunately, they usually can be accessed en masse only by paying significant subscription fees to commercial organizations, while those few services that do made them available for free impose strict limitations on their reuse. In this paper we provide an overview of the OpenCitations Project (http://opencitations.net) undertaken to remedy this situation, and of its main product, the OpenCitations Corpus, which is an open repository of accurate bibliographic citation data harvested from the scholarly literature, made available in RDF under a Creative Commons public domain dedication. RASH version: https://w3id.org/oc/paper/occ-lisc2016.html",
"title": ""
},
{
"docid": "f6df414f8f61dbdab32be2f05d921cb8",
"text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas, we perform this task at ease given very few examples for learning. It has been proposed that the quick grasp of concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.",
"title": ""
},
{
"docid": "6c9c06604d5ef370b803bb54b4fe1e0c",
"text": "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of minibatch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.",
"title": ""
},
{
"docid": "eaf7f022e04a27c1616bff2d052d0e06",
"text": "The human hand moves in complex and high-dimensional ways, making estimation of 3D hand pose configurations from images alone a challenging task. In this work we propose a method to learn a statistical hand model represented by a cross-modal trained latent space via a generative deep neural network. We derive an objective function from the variational lower bound of the VAE framework and jointly optimize the resulting cross-modal KL-divergence and the posterior reconstruction objective, naturally admitting a training regime that leads to a coherent latent space across multiple modalities such as RGB images, 2D keypoint detections or 3D hand configurations. Additionally, it grants a straightforward way of using semi-supervision. This latent space can be directly used to estimate 3D hand poses from RGB images, outperforming the state-of-the art in different settings. Furthermore, we show that our proposed method can be used without changes on depth images and performs comparably to specialized methods. Finally, the model is fully generative and can synthesize consistent pairs of hand configurations across modalities. We evaluate our method on both RGB and depth datasets and analyze the latent space qualitatively.",
"title": ""
},
{
"docid": "b50efa7b82d929c1b8767e23e8359a06",
"text": "Intrusion detection (ID) is an important component of infrastructure protection mechanisms. Intrusion detection systems (IDSs) need to be accurate, adaptive, and extensible. Given these requirements and the complexities of today's network environments, we need a more systematic and automated IDS development process rather that the pure knowledge encoding and engineering approaches. This article describes a novel framework, MADAM ID, for Mining Audit Data for Automated Models for Instrusion Detection. This framework uses data mining algorithms to compute activity patterns from system audit data and extracts predictive features from the patterns. It then applies machine learning algorithms to the audit records taht are processed according to the feature definitions to generate intrusion detection rules. Results from the 1998 DARPA Intrusion Detection Evaluation showed that our ID model was one of the best performing of all the participating systems. We also briefly discuss our experience in converting the detection models produced by off-line data mining programs to real-time modules of existing IDSs.",
"title": ""
},
{
"docid": "58fbd637f7c044aeb0d55ba015c70f61",
"text": "This paper outlines an innovative software development that utilizes Quality of Service (QoS) and parallel technologies in Cisco Catalyst Switches to increase the analytical performance of a Network Intrusion Detection and Protection System (NIDPS) when deployed in highspeed networks. We have designed a real network to present experiments that use a Snort NIDPS. Our experiments demonstrate the weaknesses of NIDPSes, such as inability to process multiple packets and propensity to drop packets in heavy traffic and high-speed networks without analysing them. We tested Snort’s analysis performance, gauging the number of packets sent, analysed, dropped, filtered, injected, and outstanding. We suggest using QoS configuration technologies in a Cisco Catalyst 3560 Series Switch and parallel Snorts to improve NIDPS performance and to reduce the number of dropped packets. Our results show that our novel configuration improves performance.",
"title": ""
}
] | scidocsrr |
d19a61d7f0638212a59ac54bbaee290b | Predicting Motivations of Actions by Leveraging Text | [
{
"docid": "2052b47be2b5e4d0c54ab0be6ae1958b",
"text": "Discriminative training approaches like structural SVMs have shown much promise for building highly complex and accurate models in areas like natural language processing, protein structure prediction, and information retrieval. However, current training algorithms are computationally expensive or intractable on large datasets. To overcome this bottleneck, this paper explores how cutting-plane methods can provide fast training not only for classification SVMs, but also for structural SVMs. We show that for an equivalent “1-slack” reformulation of the linear SVM training problem, our cutting-plane method has time complexity linear in the number of training examples. In particular, the number of iterations does not depend on the number of training examples, and it is linear in the desired precision and the regularization parameter. Furthermore, we present an extensive empirical evaluation of the method applied to binary classification, multi-class classification, HMM sequence tagging, and CFG parsing. The experiments show that the cutting-plane algorithm is broadly applicable and fast in practice. On large datasets, it is typically several orders of magnitude faster than conventional training methods derived from decomposition methods like SVM-light, or conventional cutting-plane methods. Implementations of our methods are available at www.joachims.org .",
"title": ""
}
] | [
{
"docid": "15975baddd2e687d14588fcfc674bbc8",
"text": "The treatment of external genitalia trauma is diverse according to the nature of trauma and injured anatomic site. The classification of trauma is important to establish a strategy of treatment; however, to date there has been less effort to make a classification for trauma of external genitalia. The classification of external trauma in male could be established by the nature of injury mechanism or anatomic site: accidental versus self-mutilation injury and penis versus penis plus scrotum or perineum. Accidental injury covers large portion of external genitalia trauma because of high prevalence and severity of this disease. The aim of this study is to summarize the mechanism and treatment of the traumatic injury of penis. This study is the first review describing the issue.",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "ae4974a3d7efedab7cd6651101987e79",
"text": "Fisher Kernels and Deep Learning were two developments with significant impact on large-scale object categorization in the last years. Both approaches were shown to achieve state-of-the-art results on large-scale object categorization datasets, such as ImageNet. Conceptually, however, they are perceived as very different and it is not uncommon for heated debates to spring up when advocates of both paradigms meet at conferences or workshops. In this work, we emphasize the similarities between both architectures rather than their differences and we argue that such a unified view allows us to transfer ideas from one domain to the other. As a concrete example we introduce a method for learning a support vector machine classifier with Fisher kernel at the same time as a task-specific data representation. We reinterpret the setting as a multi-layer feed forward network. Its final layer is the classifier, parameterized by a weight vector, and the two previous layers compute Fisher vectors, parameterized by the coefficients of a Gaussian mixture model. We introduce a gradient descent based learning algorithm that, in contrast to other feature learning techniques, is not just derived from intuition or biological analogy, but has a theoretical justification in the framework of statistical learning theory. Our experiments show that the new training procedure leads to significant improvements in classification accuracy while preserving the modularity and geometric interpretability of a support vector machine setup.",
"title": ""
},
{
"docid": "60fbaecc398f04bdb428ccec061a15a5",
"text": "A decade earlier, work on modeling and analyzing social network, was primarily focused on manually collected datasets where the friendship links were sparse but relatively noise free (i.e. all links represented strong physical relation). With the popularity of online social networks, the notion of “friendship” changed dramatically. The data collection, now although automated, contains dense friendship links but the links contain noisier information (i.e. some weaker relationships). The aim of this study is to identify these weaker links and suggest how these links (identification) play a vital role in improving social media design elements such as privacy control, detection of auto-bots, friend introductions, information prioritization and so on. The binary metric used so far for modeling links in social network (i.e. friends or not) is of little importance as it groups all our relatives, close friends and acquaintances in the same category. Therefore a popular notion of tie-strength has been incorporated for modeling links. In this paper, a predictive model is presented that helps evaluate tie-strength for each link in network based on transactional features (e.g. communication, file transfer, photos). The model predicts tie strength with 76.4% efficiency. This work also suggests that important link properties manifest similarly across different social media sites.",
"title": ""
},
{
"docid": "6ddf8cc094a38ebe47d51303f4792dc6",
"text": "The symmetric travelling salesman problem is a real world combinatorial optimization problem and a well researched domain. When solving combinatorial optimization problems such as the travelling salesman problem a low-level construction heuristic is usually used to create an initial solution, rather than randomly creating a solution, which is further optimized using techniques such as tabu search, simulated annealing and genetic algorithms, amongst others. These heuristics are usually manually derived by humans and this is a time consuming process requiring many man hours. The research presented in this paper forms part of a larger initiative aimed at automating the process of deriving construction heuristics for combinatorial optimization problems.\n The study investigates genetic programming to induce low-level construction heuristics for the symmetric travelling salesman problem. While this has been examined for other combinatorial optimization problems, to the authors' knowledge this is the first attempt at evolving low-level construction heuristics for the travelling salesman problem. In this study a generational genetic programming algorithm randomly creates an initial population of low-level construction heuristics which is iteratively refined over a set number of generations by the processes of fitness evaluation, selection of parents and application of genetic operators.\n The approach is tested on 23 problem instances, of varying problem characteristics, from the TSPLIB and VLSI benchmark sets. The evolved heuristics were found to perform better than the human derived heuristic, namely, the nearest neighbourhood heuristic, generally used to create initial solutions for the travelling salesman problem.",
"title": ""
},
{
"docid": "4f3be105eaaad6d3741c370caa8e764e",
"text": "Ankylosing spondylitis (AS) is a chronic systemic inflammatory disease that affects mainly the axial skeleton and causes significant pain and disability. Aquatic (water-based) exercise may have a beneficial effect in various musculoskeletal conditions. The aim of this study was to compare the effectiveness of aquatic exercise interventions with land-based exercises (home-based exercise) in the treatment of AS. Patients with AS were randomly assigned to receive either home-based exercise or aquatic exercise treatment protocol. Home-based exercise program was demonstrated by a physiotherapist on one occasion and then, exercise manual booklet was given to all patients in this group. Aquatic exercise program consisted of 20 sessions, 5× per week for 4 weeks in a swimming pool at 32–33 °C. All the patients in both groups were assessed for pain, spinal mobility, disease activity, disability, and quality of life. Evaluations were performed before treatment (week 0) and after treatment (week 4 and week 12). The baseline and mean values of the percentage changes calculated for both groups were compared using independent sample t test. Paired t test was used for comparison of pre- and posttreatment values within groups. A total of 69 patients with AS were included in this study. We observed significant improvements for all parameters [pain score (VAS) visual analog scale, lumbar flexion/extension, modified Schober test, chest expansion, bath AS functional index, bath AS metrology index, bath AS disease activity index, and short form-36 (SF-36)] in both groups after treatment at week 4 and week 12 (p < 0.05). Comparison of the percentage changes of parameters both at week 4 and week 12 relative to pretreatment values showed that improvement in VAS (p < 0.001) and bodily pain (p < 0.001), general health (p < 0.001), vitality (p < 0.001), social functioning (p < 0.001), role limitations due to emotional problems (p < 0.001), and general mental health (p < 0.001) subparts of SF-36 were better in aquatic exercise group. It is concluded that a water-based exercises produced better improvement in pain score and quality of life of the patients with AS compared with home-based exercise.",
"title": ""
},
{
"docid": "31ef6c21c9877df266e0fd0506c3e90a",
"text": "My research has centered around understanding the colorful appearance of physical and digital paintings and images. My work focuses on decomposing images or videos into more editable data structures called layers, to enable efficient image or video re-editing. Given a time-lapse painting video, we can recover translucent layer strokes from every frame pairs by maximizing translucency of layers for its maximum re-usability, under either digital color compositing model or a physically inspired nonlinear color layering model, after which, we apply a spatial-temporal clustering on strokes to obtain semantic layers for further editing, such as global recoloring and local recoloring, spatial-temporal gradient recoloring and so on. With a single image input, we use the convex shape geometry intuition of color points distribution in RGB space, to help extract a small size palette from a image and then solve an optimization to extract translucent RGBA layers, under digital alpha compositing model. The translucent layers are suitable for global and local image recoloring and new object insertion as layers efficiently. Alternatively, we can apply an alternating least square optimization to extract multi-spectral physical pigment parameters from a single digitized physical painting image, under a physically inspired nonlinear color mixing model, with help of some multi-spectral pigment parameters priors. With these multi-spectral pigment parameters and their mixing layers, we demonstrate tonal adjustments, selection masking, recoloring, physical pigment understanding, palette summarization and edge enhancement. Our recent ongoing work introduces an extremely scalable and efficient yet simple palette-based image decomposition algorithm to extract additive mixing layers from single image. Our approach is based on the geometry of images in RGBXY-space. This new geometric approach is orders of magnitude more efficient than previous work and requires no numerical optimization. We demonstrate a real-time layer updating GUI. We also present a palette-based framework for color composition for visual applications, such as image and video harmonization, color transfer and so on.",
"title": ""
},
{
"docid": "db02af0f6c2994e4348c1f7c4f3191ce",
"text": "American students rank well below international peers in the disciplines of science, technology, engineering, and mathematics (STEM). Early exposure to STEM-related concepts is critical to later academic achievement. Given the rise of tablet-computer use in early childhood education settings, interactive technology might be one particularly fruitful way of supplementing early STEM education. Using a between-subjects experimental design, we sought to determine whether preschoolers could learn a fundamental math concept (i.e., measurement with non-standard units) from educational technology, and whether interactivity is a crucial component of learning from that technology. Participants who either played an interactive tablet-based game or viewed a non-interactive video demonstrated greater transfer of knowledge than those assigned to a control condition. Interestingly, interactivity contributed to better performance on near transfer tasks, while participants in the non-interactive condition performed better on far transfer tasks. Our findings suggest that, while preschool-aged children can learn early STEM skills from educational technology, interactivity may only further support learning in certain",
"title": ""
},
{
"docid": "c8a9919a2df2cfd730816cd0171f08dd",
"text": "In this paper, we propose a new deep network that learns multi-level deep representations for image emotion classi fication (MldrNet). Image emotion can be recognized through image semantics, image aesthetics and low-level visual fea tures from both global and local views. Existing image emotion classification works using hand-crafted features o r deep features mainly focus on either low-level visual featu res or semantic-level image representations without taking al l factors into consideration. Our proposed MldrNet unifies deep representations of three levels, i.e. image semantics , image aesthetics and low-level visual features through mul tiple instance learning (MIL) in order to effectively cope wit h noisy labeled data, such as images collected from the Intern et. Extensive experiments on both Internet images and abstract paintings demonstrate the proposed method outperforms the state-of-the-art methods using deep features or hand-craf ted features. The proposed approach also outperforms the state of-the-art methods with at least 6% performance improvement in terms of overall classification accuracy.",
"title": ""
},
{
"docid": "c13c97749874fd32972f6e8b75fd20d1",
"text": "Text categorization is the task of automatically assigning unlabeled text documents to some predefined category labels by means of an induction algorithm. Since the data in text categorization are high-dimensional, feature selection is broadly used in text categorization systems for reducing the dimensionality. In the literature, there are some widely known metrics such as information gain and document frequency thresholding. Recently, a generative graphical model called latent dirichlet allocation (LDA) that can be used to model and discover the underlying topic structures of textual data, was proposed. In this paper, we use the hidden topic analysis of LDA for feature selection and compare it with the classical feature selection metrics in text categorization. For the experiments, we use SVM as the classifier and tf∗idf weighting for weighting the terms. We observed that almost in all metrics, information gain performs best at all keyword numbers while the LDA-based metrics perform similar to chi-square and document frequency thresholding.",
"title": ""
},
{
"docid": "8653252228548a8df272171b930fc97f",
"text": "Melatonin, hormone of the pineal gland, is concerned with biological timing. It is secreted at night in all species and in ourselves is thereby associated with sleep, lowered core body temperature, and other night time events. The period of melatonin secretion has been described as 'biological night'. Its main function in mammals is to 'transduce' information about the length of the night, for the organisation of daylength dependent changes, such as reproductive competence. Exogenous melatonin has acute sleepiness-inducing and temperature-lowering effects during 'biological daytime', and when suitably timed (it is most effective around dusk and dawn) it will shift the phase of the human circadian clock (sleep, endogenous melatonin, core body temperature, cortisol) to earlier (advance phase shift) or later (delay phase shift) times. The shifts induced are sufficient to synchronise to 24 h most blind subjects suffering from non-24 h sleep-wake disorder, with consequent benefits for sleep. Successful use of melatonin's chronobiotic properties has been reported in other sleep disorders associated with abnormal timing of the circadian system: jetlag, shiftwork, delayed sleep phase syndrome, some sleep problems of the elderly. No long-term safety data exist, and the optimum dose and formulation for any application remains to be clarified.",
"title": ""
},
{
"docid": "d1be704e4d81ab1466482a4924f00474",
"text": "Fetus-in-fetu (FIF) is a rare congenital condition in which a fetiform mass is detected in the host abdomen and also in other sites such as the intracranium, thorax, head, and neck. This condition has been rarely reported in the literature. Herein, we report the case of a fetus presenting with abdominal cystic mass and ascites and prenatally diagnosed as meconium pseudocyst. Explorative laparotomy revealed an irregular fetiform mass in the retroperitoneum within a fluid-filled cyst. The mass contained intestinal tract, liver, pancreas, and finger. Fetal abdominal cystic mass has been identified in a broad spectrum of diseases. However, as in our case, FIF is often overlooked during differential diagnosis. FIF should also be differentiated from other conditions associated with fetal abdominal masses.",
"title": ""
},
{
"docid": "562db49b88675b612e697ab411c948a4",
"text": "Infrared (IR) guided missiles remain a threat to both military and civilian aircraft, and as such, the development of effective countermeasures against this threat remains vital. A simulation has been developed to assess the effectiveness of a jammer signal against a conical-scan seeker by testing critical jammer parameters. The critical parameters of a jammer signal are the jam-to-signal (J/S) ratio, the jammer frequency and the jammer duty cycle. It was found that the most effective jammer signal is one with a modulated envelope.",
"title": ""
},
{
"docid": "1abef5c69eab484db382cdc2a2a1a73f",
"text": "Conventional methods of 3D object generative modeling learn volumetric predictions using deep networks with 3D convolutional operations, which are direct analogies to classical 2D ones. However, these methods are computationally wasteful in attempt to predict 3D shapes, where information is rich only on the surfaces. In this paper, we propose a novel 3D generative modeling framework to efficiently generate object shapes in the form of dense point clouds. We use 2D convolutional operations to predict the 3D structure from multiple viewpoints and jointly apply geometric reasoning with 2D projection optimization. We introduce the pseudo-renderer, a differentiable module to approximate the true rendering operation, to synthesize novel depth maps for optimization. Experimental results for single-image 3D object reconstruction tasks show that we outperforms state-of-the-art methods in terms of shape similarity and prediction density.",
"title": ""
},
{
"docid": "519e8ee14d170ce92eecc760e810ade4",
"text": "Transcript-based annotation and pedigree analysis are two basic steps in the computational analysis of whole-exome sequencing experiments in genetic diagnostics and disease-gene discovery projects. Here, we present Jannovar, a stand-alone Java application as well as a Java library designed to be used in larger software frameworks for exome and genome analysis. Jannovar uses an interval tree to identify all transcripts affected by a given variant, and provides Human Genome Variation Society-compliant annotations both for variants affecting coding sequences and splice junctions as well as untranslated regions and noncoding RNA transcripts. Jannovar can also perform family-based pedigree analysis with Variant Call Format (VCF) files with data from members of a family segregating a Mendelian disorder. Using a desktop computer, Jannovar requires a few seconds to annotate a typical VCF file with exome data. Jannovar is freely available under the BSD2 license. Source code as well as the Java application and library file can be downloaded from http://compbio.charite.de (with tutorial) and https://github.com/charite/jannovar.",
"title": ""
},
{
"docid": "3dcce7058de4b41ad3614561832448a4",
"text": "Declarative models play an important role in most software design activities, by allowing designs to be constructed that selectively abstract over complex implementation details. In the user interface setting, Model-Based User Interface Development Environments (MB-UIDEs) provide a context within which declarative models can be constructed and related, as part of the interface design process. However, such declarative models are not usually directly executable, and may be difficult to relate to existing software components. It is therefore important that MB-UIDEs both fit in well with existing software architectures and standards, and provide an effective route from declarative interface specification to running user interfaces. This paper describes how user interface software is generated from declarative descriptions in the Teallach MB-UIDE. Distinctive features of Teallach include its open architecture, which connects directly to existing applications and widget sets, and the generation of executable interface applications in Java. This paper focuses on how Java programs, organized using the model-view-controller pattern (MVC), are generated from the task, domain and presentation models of Teallach.",
"title": ""
},
{
"docid": "6b5950c88c8cb414a124e74e9bc2ed00",
"text": "As most regular readers of this TRANSACTIONS know, the development of digital signal processing techniques for applications involving image or picture data has been an increasingly active research area for the past decade. Collectively, t h s work is normally characterized under the generic heading “digital image processing.” Interestingly, the two books under review here share this heading as their title. Both are quite ambitious undertakings in that they attempt to integrate contributions from many disciplines (classical systems theory, digital signal processing, computer science, statistical communications, etc.) into unified, comprehensive presentations. In this regard it can be said that both are to some extent successful, although in quite different ways. Why the unusual step of a joint review? A brief overview of the two books reveals that they share not only a common title, but also similar objectives/purposes, intended audiences, structural organizations, and lists of topics considered. A more careful study reveals that substantial differences do exist, however, in the style and depth of subject treatment (as reflected in the difference in their lengths). Given their almost simultaneous publication, it seems appropriate to discuss these similarities/differences in a common setting. After much forethought (and two drafts), the reviewer decided to structure this review by describing the general topical material in their (joint) major sections, with supplementary comments directed toward the individual texts. It is hoped that this will provide the reader with a brief survey of the books’ contents and some flavor of their contrasting approaches. To avoid the identity problems of the joint title, each book will be subsequently referred to using the respective authors’ names: Gonzalez/Wintz and Pratt. Subjects will be correlated with chapter number(s) and approximate l ngth of coverage.",
"title": ""
},
{
"docid": "debdeab8df4e363762683d527edbe61e",
"text": "The diaphragm is the primary muscle involved in breathing and other non-primarily respiratory functions such as the maintenance of correct posture and lumbar and sacroiliac movement. It intervenes to facilitate cleaning of the upper airways through coughing, facilitates the evacuation of the intestines, and promotes the redistribution of the body's blood. The diaphragm also has the ability to affect the perception of pain and the emotional state of the patient, functions that are the subject of this article. The aim of this article is to gather for the first time, within a single text, information on the nonrespiratory functions of the diaphragm muscle and its analgesic and emotional response functions. It also aims to highlight and reflect on the fact that when the diaphragm is treated manually, a daily occurrence for manual operators, it is not just an area of musculature that is treated but the entire body, including the psyche. This reflection allows for a multidisciplinary approach to the diaphragm and the collaboration of various medical and nonmedical practitioners, with the ultimate goal of regaining or improving the patient's physical and mental well-being.",
"title": ""
},
{
"docid": "813be8ec8a933ff3966c739653212487",
"text": "This paper describes a method for vision-based unmanned aerial vehicle (UAV) motion estimation from multiple planar homographies. The paper also describes the determination of the relative displacement between different UAVs employing techniques for blob feature extraction and matching. It then presents and shows experimental results of the application of the proposed technique to multi-UAV detection of forest fires",
"title": ""
},
{
"docid": "1b7a10807e85018743338c7e59075987",
"text": "We propose a 600 GHz data transmission of high definition television using the combination of a photonic emission using an uni-travelling carrier photodiode and an electronic detection, featuring a very low power at the receiver. Only 10 nW of THz power at 600GHz were sufficient to ensure real-time error-free operation. This combination of photonics at emission and heterodyne detection lead to achieve THz wireless links with a safe level of electromagnetic exposure.",
"title": ""
}
] | scidocsrr |
c1ed93ce1ab856b0c97cbf38270dd1bf | WHUIRGroup at TREC 2016 Clinical Decision Support Task | [
{
"docid": "03b08a01be48aaa76684411b73e5396c",
"text": "The goal of TREC 2015 Clinical Decision Support Track was to retrieve biomedical articles relevant for answering three kinds of generic clinical questions, namely diagnosis, test, and treatment. In order to achieve this purpose, we investigated three approaches to improve the retrieval of relevant articles: modifying queries, improving indexes, and ranking with ensembles. Our final submissions were a combination of several different configurations of these approaches. Our system mainly focused on the summary fields of medical reports. We built two different kinds of indexes – an inverted index on the free text and a second kind of indexes on the Unified Medical Language System (UMLS) concepts within the entire articles that were recognized by MetaMap. We studied the variations of including UMLS concepts at paragraph and sentence level and experimented with different thresholds of MetaMap matching scores to filter UMLS concepts. The query modification process in our system involved automatic query construction, pseudo relevance feedback, and manual inputs from domain experts. Furthermore, we trained a re-ranking sub-system based on the results of TREC 2014 Clinical Decision Support track using Indri’s Learning to Rank package, RankLib. Our experiments showed that the ensemble approach could improve the overall results by boosting the ranking of articles that are near the top of several single ranked lists.",
"title": ""
}
] | [
{
"docid": "edbad8d3889a431c16e4a51d0c1cc19c",
"text": "We propose to automatically create capsule wardrobes. Given an inventory of candidate garments and accessories, the algorithm must assemble a minimal set of items that provides maximal mix-and-match outfits. We pose the task as a subset selection problem. To permit efficient subset selection over the space of all outfit combinations, we develop submodular objective functions capturing the key ingredients of visual compatibility, versatility, and user-specific preference. Since adding garments to a capsule only expands its possible outfits, we devise an iterative approach to allow near-optimal submodular function maximization. Finally, we present an unsupervised approach to learn visual compatibility from \"in the wild\" full body outfit photos; the compatibility metric translates well to cleaner catalog photos and improves over existing methods. Our results on thousands of pieces from popular fashion websites show that automatic capsule creation has potential to mimic skilled fashionistas in assembling flexible wardrobes, while being significantly more scalable.",
"title": ""
},
{
"docid": "e1f2647131e9194bc4edfd9c629900a8",
"text": "Thomson coil actuators (also known as repulsion coil actuators) are well suited for vacuum circuit breakers when fast operation is desired such as for hybrid AC and DC circuit breaker applications. This paper presents investigations on how the actuator drive circuit configurations as well as their discharging pulse patterns affect the magnetic force and therefore the acceleration, as well as the mechanical robustness of these actuators. Comprehensive multi-physics finite-element simulations of the Thomson coil actuated fast mechanical switch are carried out to study the operation transients and how to maximize the actuation speed. Different drive circuits are compared: three single switch circuits are evaluated; the pulse pattern of a typical pulse forming network circuit is studied, concerning both actuation speed and maximum stress; a two stage drive circuit is also investigated. A 630 A, 15 kV / 1 ms prototype employing a vacuum interrupter with 6 mm maximum open gap was developed and tested. The total moving mass accelerated by the actuator is about 1.2 kg. The measured results match well with simulated results in the FEA study.",
"title": ""
},
{
"docid": "bb9829b182241f70dbc1addd1452c09d",
"text": "This paper presents the first complete 2.5 V, 77 GHz chipset for Doppler radar and imaging applications fabricated in 0.13 mum SiGe HBT technology. The chipset includes a voltage-controlled oscillator with -101.6 dBc/Hz phase noise at 1 MHz offset, an 25 dB gain low-noise amplifier, a novel low-voltage double-balanced Gilbert-cell mixer with two mm-wave baluns and IF amplifier achieving 12.8 dB noise figure and an OP1dB of +5 dBm, a 99 GHz static frequency divider consuming a record low 75 mW, and a power amplifier with 19 dB gain, +14.4 dBm saturated power, and 15.7% PAE. Monolithic spiral inductors and transformers result in the lowest reported 77 GHz receiver core area of only 0.45 mm times 0.30 mm. Simplified circuit topologies allow 77 GHz operation up to 125degC from 2.5 V/1.8 V supplies. Technology splits of the SiGe HBTs are employed to determine the optimum HBT profile for mm-wave performance.",
"title": ""
},
{
"docid": "45c515da4f8e9c383f6d4e0fa6e09192",
"text": "In this paper, we demonstrate our Img2UML system tool. This system tool eliminates the gap between pixel-based diagram and engineering model, that it supports the extraction of the UML class model from images and produces an XMI file of the UML model. In addition to this, Img2UML offers a repository of UML class models of images that have been collected from the Internet. This project has both industrial and academic aims: for industry, this tool proposals a method that enables the updating of software design documentation (that typically contains UML images). For academia, this system unlocks a corpus of UML models that are publicly available, but not easily analyzable for scientific studies.",
"title": ""
},
{
"docid": "3ba6a250322d67cd0a91b703d75b88dc",
"text": "Untethered robots miniaturized to the length scale of millimeter and below attract growing attention for the prospect of transforming many aspects of health care and bioengineering. As the robot size goes down to the order of a single cell, previously inaccessible body sites would become available for high-resolution in situ and in vivo manipulations. This unprecedented direct access would enable an extensive range of minimally invasive medical operations. Here, we provide a comprehensive review of the current advances in biomedical untethered mobile milli/microrobots. We put a special emphasis on the potential impacts of biomedical microrobots in the near future. Finally, we discuss the existing challenges and emerging concepts associated with designing such a miniaturized robot for operation inside a biological environment for biomedical applications.",
"title": ""
},
{
"docid": "0573d09bf0fb573b5ad0bdfa7f3c2485",
"text": "Social media have been adopted by many businesses. More and more companies are using social media tools such as Facebook and Twitter to provide various services and interact with customers. As a result, a large amount of user-generated content is freely available on social media sites. To increase competitive advantage and effectively assess the competitive environment of businesses, companies need to monitor and analyze not only the customer-generated content on their own social media sites, but also the textual information on their competitors’ social media sites. In an effort to help companies understand how to perform a social media competitive analysis and transform social media data into knowledge for decision makers and e-marketers, this paper describes an in-depth case study which applies text mining to analyze unstructured text content on Facebook and Twitter sites of the three largest pizza chains: Pizza Hut,",
"title": ""
},
{
"docid": "e72f8ad61a7927fee8b0a32152b0aa4b",
"text": "Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominately, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratiobased approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45km and 209km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.",
"title": ""
},
{
"docid": "6954c2a51c589987ba7e37bd81289ba1",
"text": "TYAs paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main methods for tracking discussed and implemented are blob analysis, optical flow and foreground detection. A further analysis is also done testing two of the techniques using a number of video sequences that include different levels of difficulties.",
"title": ""
},
{
"docid": "a64847d15292f9758a337b8481bc7814",
"text": "This paper studies the use of tree edit distance for pattern matching of abstract syntax trees of images generated with tree picture grammars. This was done with a view to measuring its effectiveness in determining image similarity, when compared to current state of the art similarity measures used in Content Based Image Retrieval (CBIR). Eight computer based similarity measures were selected for their diverse methodology and effectiveness. The eight visual descriptors and tree edit distance were tested against some of the images from our corpus of thousands of syntactically generated images. The first and second sets of experiments showed that tree edit distance and Spacial Colour Distribution (SpCD) are the most suited for determining similarity of syntactically generated images. A third set of experiments was performed with tree edit distance and SpCD only. Results obtained showed that while both of them performed well in determining similarity of the generated images, the tree edit distance is better able to detect more subtle human observable image differences than SpCD. Also, tree edit distance more closely models the generative sequence of these tree picture grammars.",
"title": ""
},
{
"docid": "320c5bf641fa348cd1c8fb806558fe68",
"text": "A CMOS low-dropout regulator (LDO) with 3.3 V output voltage and 100 mA output current for system-on-chip applications is presented. The proposed LDO is independent of off-chip capacitor, thus the board space and external pins are reduced. By utilizing dynamic slew-rate enhancement (SRE) circuit and nested Miller compensation (NMC) on LDO structure, the proposed LDO provides high stability during line and load regulation without off-chip load capacitor. The overshot voltage has been limited within 550 mV and settling time is less than 50 mus when load current reducing from 100 mA to 1 mA. By using 30 nA reference current, the quiescent current is 3.3 muA. The experiment results agree with the simulation results. The proposed design is implemented by CSMC 0.5 mum mixed-signal process.",
"title": ""
},
{
"docid": "e8bdec1a8f28631e0a61d9d1b74e4e05",
"text": "As a kernel function in network routers, packet classification requires the incoming packet headers to be checked against a set of predefined rules. There are two trends for packet classification: (1) to examine a large number of packet header fields, and (2) to use software-based solutions on multi-core general purpose processors and virtual machines. Although packet classification has been widely studied, most existing solutions on multi-core systems target the classic 5-field packet classification; it is not easy to scale up their performance with respect to the number of packet header fields. In this work, we present a decomposition-based packet classification approach; it supports large rule sets consisting of a large number of packet header fields. In our approach, range-tree and hashing are used to search the fields of the input packet header in parallel. The partial results from all the fields are represented in rule ID sets; they are merged efficiently to produce the final match result. We implement our approach and evaluate its performance with respect to overall throughput and processing latency for rule set size varying from 1 to 32 K. Experimental results on state-of-the-art 16-core platforms show that, an overall throughput of 48 million packets per second and a processing latency of 2,000 ns per packet can be achieved for a 32 K rule set.",
"title": ""
},
{
"docid": "634509a9d6484ba51d01f9c049551df5",
"text": "In this paper, we propose a joint training approach to voice activity detection (VAD) to address the issue of performance degradation due to unseen noise conditions. Two key techniques are integrated into this deep neural network (DNN) based VAD framework. First, a regression DNN is trained to map the noisy to clean speech features similar to DNN-based speech enhancement. Second, the VAD part to discriminate speech against noise backgrounds is also a DNN trained with a large amount of diversified noisy data synthesized by a wide range of additive noise types. By stacking the classification DNN on top of the enhancement DNN, this integrated DNN can be jointly trained to perform VAD. The feature mapping DNN serves as a noise normalization module aiming at explicitly generating the “clean” features which are easier to be correctly recognized by the following classification DNN. Our experiment results demonstrate the proposed noise-universal DNNbased VAD algorithm achieves a good generalization capacity to unseen noises, and the jointly trained DNNs consistently and significantly outperform the conventional classification-based DNN for all the noise types and signal-to-noise levels tested.",
"title": ""
},
{
"docid": "0ea451a2030603899d9ad95649b73908",
"text": "Distributed artificial intelligence (DAI) is a subfield of artificial intelligence that deals with interactions of intelligent agents. Precisely, DAI attempts to construct intelligent agents that make decisions that allow them to achieve their goals in a world populated by other intelligent agents with their own goals. This paper discusses major concepts used in DAI today. To do this, a taxonomy of DAI is presented, based on the social abilities of an individual agent, the organization of agents, and the dynamics of this organization through time. Social abilities are characterized by the reasoning about other agents and the assessment of a distributed situation. Organization depends on the degree of cooperation and on the paradigm of communication. Finally, the dynamics of organization is characterized by the global coherence of the group and the coordination between agents. A reasonably representative review of recent work done in DAI field is also supplied in order to provide a better appreciation of this vibrant AI field. The paper concludes with important issues in which further research in DAI is needed.",
"title": ""
},
{
"docid": "2f1e059a0c178b3703c31ad31761dadc",
"text": "This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a data set that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes advantages and disadvantages of proposed approaches and unsolved problems in the area.",
"title": ""
},
{
"docid": "3007cf623eff81d46a496e16a0d2d5bc",
"text": "Grounded language learning bridges words like ‘red’ and ‘square’ with robot perception. The vast majority of existing work in this space limits robot perception to vision. In this paper, we build perceptual models that use haptic, auditory, and proprioceptive data acquired through robot exploratory behaviors to go beyond vision. Our system learns to ground natural language words describing objects using supervision from an interactive humanrobot “I Spy” game. In this game, the human and robot take turns describing one object among several, then trying to guess which object the other has described. All supervision labels were gathered from human participants physically present to play this game with a robot. We demonstrate that our multi-modal system for grounding natural language outperforms a traditional, vision-only grounding framework by comparing the two on the “I Spy” task. We also provide a qualitative analysis of the groundings learned in the game, visualizing what words are understood better with multi-modal sensory information as well as identifying learned word meanings that correlate with physical object properties (e.g. ‘small’ negatively correlates with object weight).",
"title": ""
},
{
"docid": "9a6f62dd4fc2e9b7f6be5b30c731367c",
"text": "In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of a feasibility phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally independent, and the only coupling between them is provided by the filter. The method is independent of the internal algorithms used in each iteration, as long as these algorithms satisfy reasonable assumptions on their efficiency. Under standard hypotheses, we show two results: for a filter with minimum size, the algorithm generates a stationary accumulation point; for a slightly larger filter, all accumulation points are stationary.",
"title": ""
},
{
"docid": "5ac6e54d3ce35297c63ea3fd9c5ad0d9",
"text": "In this paper, we intend to propose a new heuristic optimization method, called animal migration optimization algorithm. This algorithm is inspired by the animal migration behavior, which is a ubiquitous phenomenon that can be found in all major animal groups, such as birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. In our algorithm, there are mainly two processes. In the first process, the algorithm simulates how the groups of animals move from the current position to the new position. During this process, each individual should obey three main rules. In the latter process, the algorithm simulates how some animals leave the group and some join the group during the migration. In order to verify the performance of our approach, 23 benchmark functions are employed. The proposed method has been compared with other well-known heuristic search methods. Experimental results indicate that the proposed algorithm performs better than or at least comparable with state-of-the-art approaches from literature when considering the quality of the solution obtained.",
"title": ""
},
{
"docid": "96804634aa7c691aed1eae11d3e44591",
"text": "AIMS\nTo investigated the association between the ABO blood group and gestational diabetes mellitus (GDM).\n\n\nMATERIALS AND METHODS\nA retrospective case-control study was conducted using data from 5424 Japanese pregnancies. GDM screening was performed in the first trimester using a casual blood glucose test and in the second trimester using a 50-g glucose challenge test. If the screening was positive, a 75-g oral glucose tolerance test was performed for a GDM diagnosis, which was defined according to the International Association of Diabetes and Pregnancy Study Groups. Logistic regression was used to obtain the odds ratio (OR) and 95% confidence interval (CI) adjusted for traditional risk factors.\n\n\nRESULTS\nWomen with the A blood group (adjusted OR: 0.34, 95% CI: 0.19-0.63), B (adjusted OR: 0.35, 95% CI: 0.18-0.68), or O (adjusted OR: 0.39, 95% CI: 0.21-0.74) were at decreased risk of GDM compared with those with group AB. Women with the AB group were associated with increased risk of GDM as compared with those with A, B, or O (adjusted OR: 2.73, 95% CI: 1.64-4.57).\n\n\nCONCLUSION\nABO blood groups are associated with GDM, and group AB was a risk factor for GDM in Japanese population.",
"title": ""
},
{
"docid": "77df05c7e00485b66a1aacbab44847fb",
"text": "Study Objective: To determine the prevalence of vulvovaginitis, predisposing factors, microbial etiology and therapy in patients treated at the Hospital del Niño DIF, Pachuca, Hidalgo, Mexico. Design. This was an observational and descriptive study from 2006 to 2009. Setting: Hospital del Niño DIF, Pachuca, Hidalgo, Mexico. Participants. Patients from 0 to 16 years, with vulvovaginitis and/or vaginal discharge were included. Interventions: None. Main Outcome Measures: Demographic data, etiology, clinical features, risk factors and therapy were analyzed. Results: Four hundred twenty seven patients with diagnosis of vulvovaginitis were included. The average prevalence to 4 years in the study period was 0.19%. The age group most affected was schoolchildren (225 cases: 52.69%). The main signs and symptoms presented were leucorrhea (99.3%), vaginal hyperemia (32.6%), vulvar itching (32.1%) and erythema (28.8%). Identified risk factors were poor hygiene (15.7%), urinary tract infection (14.7%), intestinal parasites (5.6%) and obesity or overweight (3.3%). The main microorganisms found in vaginal cultures were enterobacteriaceae (Escherichia coli, Klebsiella and Enterococcus faecalis), Staphylococcus spp, and Gardnerella vaginalis. Several inconsistent were found in the drug prescription of the patients. Conclusion: Vulvovaginitis prevalence in Mexican girls is low and this was caused mainly by opportunist microorganisms. The initial treatment of vulvovaginitis must include hygienic measure and an antimicrobial according to the clinical features and microorganism found.",
"title": ""
},
{
"docid": "e99d7b425ab1a2a9a2de4e10a3fbe766",
"text": "In this paper, a review of the authors' work on inkjet-printed flexible antennas, fabricated on paper substrates, is given. This is presented as a system-level solution for ultra-low-cost mass production of UHF radio-frequency identification (RFID) tags and wireless sensor nodes (WSN), in an approach that could be easily extended to other microwave and wireless applications. First, we discuss the benefits of using paper as a substrate for high-frequency applications, reporting its very good electrical/dielectric performance up to at least 1 GHz. The RF characteristics of the paper-based substrate are studied by using a microstrip-ring resonator, in order to characterize the dielectric properties (dielectric constant and loss tangent). We then give details about the inkjet-printing technology, including the characterization of the conductive ink, which consists of nano-silver particles. We highlight the importance of this technology as a fast and simple fabrication technique, especially on flexible organic (e.g., LCP) or paper-based substrates. A compact inkjet-printed UHF ldquopassive RFIDrdquo antenna, using the classic T-match approach and designed to match the IC's complex impedance, is presented as a demonstration prototype for this technology. In addition, we briefly touch upon the state-of-the-art area of fully-integrated wireless sensor modules on paper. We show the first-ever two-dimensional sensor integration with an RFID tag module on paper, as well as the possibility of a three-dimensional multilayer paper-based RF/microwave structure.",
"title": ""
}
] | scidocsrr |
b47ad52c6259a7678a2215e570b97c72 | Stability of cyberbullying victimization among adolescents: Prevalence and association with bully-victim status and psychosocial adjustment | [
{
"docid": "7875910ad044232b4631ecacfec65656",
"text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "31ec7ef4e68950919054b59942d4dbfa",
"text": "A promising approach to learn to play board games is to use reinforcement learning algorithms that can learn a game position evaluation function. In this paper we examine and compare three different methods for generating training games: (1) Learning by self-play, (2) Learning by playing against an expert program, and (3) Learning from viewing experts play against themselves. Although the third possibility generates highquality games from the start compared to initial random games generated by self-play, the drawback is that the learning program is never allowed to test moves which it prefers. We compared these three methods using temporal difference methods to learn the game of backgammon. For particular games such as draughts and chess, learning from a large database containing games played by human experts has as a large advantage that during the generation of (useful) training games, no expensive lookahead planning is necessary for move selection. Experimental results in this paper show how useful this method is for learning to play chess and draughts.",
"title": ""
},
{
"docid": "c9f48010cdf39b4d024818f1bbb21307",
"text": "This paper proposes to use probabilistic model checking to synthesize optimal robot policies in multi-tasking autonomous systems that are subject to human-robot interaction. Given the convincing empirical evidence that human behavior can be related to reinforcement models, we take as input a well-studied Q-table model of the human behavior for flexible scenarios. We first describe an automated procedure to distill a Markov decision process (MDP) for the human in an arbitrary but fixed scenario. The distinctive issue is that – in contrast to existing models – under-specification of the human behavior is included. Probabilistic model checking is used to predict the human’s behavior. Finally, the MDP model is extended with a robot model. Optimal robot policies are synthesized by analyzing the resulting two-player stochastic game. Experimental results with a prototypical implementation using PRISM show promising results.",
"title": ""
},
{
"docid": "c5b2f22f1cc160b19fa689120c35c693",
"text": "Commodity depth cameras have created many interesting new applications in the research community recently. These applications often require the calibration information between the color and the depth cameras. Traditional checkerboard based calibration schemes fail to work well for the depth camera, since its corner features cannot be reliably detected in the depth image. In this paper, we present a maximum likelihood solution for the joint depth and color calibration based on two principles. First, in the depth image, points on the checker-board shall be co-planar, and the plane is known from color camera calibration. Second, additional point correspondences between the depth and color images may be manually specified or automatically established to help improve calibration accuracy. Uncertainty in depth values has been taken into account systematically. The proposed algorithm is reliable and accurate, as demonstrated by extensive experimental results on simulated and real-world examples.",
"title": ""
},
{
"docid": "3f8f835605b34d27802f6f2f0a363ae2",
"text": "*Correspondence: Enrico Di Minin, Finnish Centre of Excellence in Metapopulation Biology, Department of Biosciences, Biocenter 3, University of Helsinki, PO Box 65 (Viikinkaari 1), 00014 Helsinki, Finland; School of Life Sciences, Westville Campus, University of KwaZulu-Natal, PO Box 54001 (University Road), Durban 4000, South Africa [email protected]; Tuuli Toivonen, Finnish Centre of Excellence in Metapopulation Biology, Department of Biosciences, Biocenter 3, University of Helsinki, PO Box 65 (Viikinkaari 1), 00014 Helsinki, Finland; Department of Geosciences and Geography, University of Helsinki, PO Box 64 (Gustaf Hällströminkatu 2a), 00014 Helsinki, Finland [email protected] These authors have contributed equally to this work.",
"title": ""
},
{
"docid": "6087e066b04b9c3ac874f3c58979f89a",
"text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.",
"title": ""
},
{
"docid": "cb4518f95b82e553b698ae136362bd59",
"text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is su¢ ciently broad and yet su¢ ciently detailed when it comes to key concepts. The text is not tailored to the
eld of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. The chapter is organized in the following sections:",
"title": ""
},
{
"docid": "85016bc639027363932f9adf7012d7a7",
"text": "The output voltage ripple is one of the most significant system parameters in switch-mode power supplies. This ripple degrades the performance of application specific integrated circuits (ASICs). The most common way to reduce it is to use additional integrated low drop-out regulators (LDO) on the ASIC. This technique usually suffers from high system efficiency as it is required for portable electronic systems. It also increases the design challenges of on-chip power management circuits and area required for the LDOs. This work presents a low-power fully integrated 0.97mm2 DC-DC Buck converter with a tuned series LDO with 1mV voltage ripple in a 0.25μm BiCMOS process. The converter prodives a power supply rejection ratio of more than 60 dB from 1 to 6MHz and a load current range of 0...400 mA. A peak efficiency of 93.7% has been measured. For high light load efficiency, automatic mode operation is implemented. To decrease the form factor and costs, the external components count has been reduced to a single inductor of 1 μH and two external capacitors of 2 μF each.",
"title": ""
},
{
"docid": "1014a33211c9ca3448fa02cf734a5775",
"text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous a parameter controls the rate of sparsi cation from no sparsi cation to total sparsi cation. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsi cation result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and nd that for datasets with large numbers of features, substantial sparsity is discoverable.",
"title": ""
},
{
"docid": "98d23862436d8ff4d033cfd48692c84d",
"text": "Memory corruption vulnerabilities are the root cause of many modern attacks. Existing defense mechanisms are inadequate; in general, the software-based approaches are not efficient and the hardware-based approaches are not flexible. In this paper, we present hardware-assisted data-flow isolation, or, HDFI, a new fine-grained data isolation mechanism that is broadly applicable and very efficient. HDFI enforces isolation at the machine word granularity by virtually extending each memory unit with an additional tag that is defined by dataflow. This capability allows HDFI to enforce a variety of security models such as the Biba Integrity Model and the Bell -- LaPadula Model. We implemented HDFI by extending the RISC-V instruction set architecture (ISA) and instantiating it on the Xilinx Zynq ZC706 evaluation board. We ran several benchmarks including the SPEC CINT 2000 benchmark suite. Evaluation results show that the performance overhead caused by our modification to the hardware is low (<; 2%). We also developed or ported several security mechanisms to leverage HDFI, including stack protection, standard library enhancement, virtual function table protection, code pointer protection, kernel data protection, and information leak prevention. Our results show that HDFI is easy to use, imposes low performance overhead, and allows us to create more elegant and more secure solutions.",
"title": ""
},
{
"docid": "6384a691d3b50e252ab76a61e28f012e",
"text": "We study the algorithmics of information structure design --- a.k.a. persuasion or signaling --- in a fundamental special case introduced by Arieli and Babichenko: multiple agents, binary actions, and no inter-agent externalities. Unlike prior work on this model, we allow many states of nature. We assume that the principal's objective is a monotone set function, and study the problem both in the public signal and private signal models, drawing a sharp contrast between the two in terms of both efficacy and computational complexity.\n When private signals are allowed, our results are largely positive and quite general. First, we use linear programming duality and the equivalence of separation and optimization to show polynomial-time equivalence between (exactly) optimal signaling and the problem of maximizing the objective function plus an additive function. This yields an efficient implementation of the optimal scheme when the objective is supermodular or anonymous. Second, we exhibit a (1-1/e)-approximation of the optimal private signaling scheme, modulo an additive loss of ε, when the objective function is submodular. These two results simplify, unify, and generalize results of [Arieli and Babichenko, 2016] and [Babichenko and Barman, 2016], extending them from a binary state of nature to many states (modulo the additive loss in the latter result). Third, we consider the binary-state case with a submodular objective, and simplify and slightly strengthen the result of [Babichenko and Barman, 2016] to obtain a (1-1/e)-approximation via a scheme which (i) signals independently to each receiver and (ii) is \"oblivious\" in that it does not depend on the objective function so long as it is monotone submodular.\n When only a public signal is allowed, our results are negative. First, we show that it is NP-hard to approximate the optimal public scheme, within any constant factor, even when the objective is additive. Second, we show that the optimal private scheme can outperform the optimal public scheme, in terms of maximizing the sender's objective, by a polynomial factor.",
"title": ""
},
{
"docid": "5b021c0223ee25535508eb1d6f63ff55",
"text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. In order for high density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including 32-KB OTP ROM, which can be programmed via external I 2C master device such as universal I2C serial EEPROM programmer, 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. This paper describes the cell, design, and implementation of high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications",
"title": ""
},
{
"docid": "ac6430e097fb5a7dc1f7864f283dcf47",
"text": "In the task of Object Recognition, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose regression using these approaches has received relatively much less attention. In this paper we show how deep architectures, specifically Convolutional Neural Networks (CNN), can be adapted to the task of simultaneous categorization and pose estimation of objects. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations of CNNs represent object pose information and how this contradicts object category representations. We extensively experiment on two recent large and challenging multi-view datasets. Our models achieve better than state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "a4f0b524f79db389c72abd27d36f8944",
"text": "In order to summarize the status of rescue robotics, this chapter will cover the basic characteristics of disasters and their impact on robotic design, describe the robots actually used in disasters to date, promising robot designs (e.g., snakes, legged locomotion) and concepts (e.g., robot teams or swarms, sensor networks), methods of evaluation in benchmarks for rescue robotics, and conclude with a discussion of the fundamental problems and open issues facing rescue robotics, and their evolution from an interesting idea to widespread adoption. The Chapter will concentrate on the rescue phase, not recovery, with the understanding that capabilities for rescue can be applied to, and extended for, the recovery phase. The use of robots in the prevention and preparedness phases of disaster management are outside the scope of this chapter.",
"title": ""
},
{
"docid": "5a9113dc952bb51faf40d242e91db09c",
"text": "This study highlights the changes in lycopene and β-carotene retention in tomato juice subjected to combined pressure-temperature (P-T) treatments ((high-pressure processing (HPP; 500-700 MPa, 30 °C), pressure-assisted thermal processing (PATP; 500-700 MPa, 100 °C), and thermal processing (TP; 0.1 MPa, 100 °C)) for up to 10 min. Processing treatments utilized raw (untreated) and hot break (∼93 °C, 60 s) tomato juice as controls. Changes in bioaccessibility of these carotenoids as a result of processing were also studied. Microscopy was applied to better understand processing-induced microscopic changes. TP did not alter the lycopene content of the tomato juice. HPP and PATP treatments resulted in up to 12% increases in lycopene extractability. all-trans-β-Carotene showed significant degradation (p < 0.05) as a function of pressure, temperature, and time. Its retention in processed samples varied between 60 and 95% of levels originally present in the control. Regardless of the processing conditions used, <0.5% lycopene appeared in the form of micelles (<0.5% bioaccessibility). Electron microscopy images showed more prominent lycopene crystals in HPP and PATP processed juice than in thermally processed juice. However, lycopene crystals did appear to be enveloped regardless of the processing conditions used. The processed juice (HPP, PATP, TP) showed significantly higher (p < 0.05) all-trans-β-carotene micellarization as compared to the raw unprocessed juice (control). Interestingly, hot break juice subjected to combined P-T treatments showed 15-30% more all-trans-β-carotene micellarization than the raw juice subjected to combined P-T treatments. This study demonstrates that combined pressure-heat treatments increase lycopene extractability. However, the in vitro bioaccessibility of carotenoids was not significantly different among the treatments (TP, PATP, HPP) investigated.",
"title": ""
},
{
"docid": "47afea1e95f86bb44a1cf11e020828fc",
"text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. However, this semantic matrix does not allow to calculate similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Base on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.",
"title": ""
},
{
"docid": "45a45087a6829486d46eda0adcff978f",
"text": "Container technology has the potential to considerably simplify the management of the software stack of High Performance Computing (HPC) clusters. However, poor integration with established HPC technologies is still preventing users and administrators to reap the benefits of containers. Message Passing Interface (MPI) is a pervasive technology used to run scientific software, often written in Fortran and C/C++, that presents challenges for effective integration with containers. This work shows how an existing MPI implementation can be extended to improve this integration.",
"title": ""
},
{
"docid": "e5ce1ddd50a728fab41043324938a554",
"text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.",
"title": ""
},
{
"docid": "f10294ed332670587cf9c100f2d75428",
"text": "In ancient times, people exchanged their goods and services to obtain what they needed (such as clothes and tools) from other people. This system of bartering compensated for the lack of currency. People offered goods/services and received in kind other goods/services. Now, despite the existence of multiple currencies and the progress of humanity from the Stone Age to the Byte Age, people still barter but in a different way. Mainly, people use money to pay for the goods they purchase and the services they obtain.",
"title": ""
},
{
"docid": "bf3450649fdf5d5bb4ee89fbaf7ec0ff",
"text": "In this study, we propose a research model to assess the effect of a mobile health (mHealth) app on exercise motivation and physical activity of individuals based on the design and self-determination theory. The research model is formulated from the perspective of motivation affordance and gamification. We will discuss how the use of specific gamified features of the mHealth app can trigger/afford corresponding users’ exercise motivations, which further enhance users’ participation in physical activity. We propose two hypotheses to test the research model using a field experiment. We adopt a 3-phase longitudinal approach to collect data in three different time zones, in consistence with approach commonly adopted in psychology and physical activity research, so as to reduce the common method bias in testing the two hypotheses.",
"title": ""
}
] | scidocsrr |
f25a8834dab8ee8f17dcef2f09d5c613 | A Tutorial on Deep Learning for Music Information Retrieval | [
{
"docid": "e5ec413c71f8f4012a94e20f7a575e68",
"text": "It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: 1) the slow gradient-based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these traditional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. The experimental results based on real-world benchmarking function approximation and classification problems including large complex applications show that the new algorithm can produce best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.",
"title": ""
},
{
"docid": "e4197f2d23fdbec9af85954c40ca46da",
"text": "In this work we investigate the applicability of unsupervised feature learning methods to the task of automatic genre prediction of music pieces. More specifically we evaluate a framework that recently has been successfully used to recognize objects in images. We first extract local patches from the time-frequency transformed audio signal, which are then pre-processed and used for unsupervised learning of an overcomplete dictionary of local features. For learning we either use a bootstrapped k-means clustering approach or select features randomly. We further extract feature responses in a convolutional manner and train a linear SVM for classification. We extensively evaluate the approach on the GTZAN dataset, emphasizing the influence of important design choices such as dimensionality reduction, pooling and patch dimension on the classification accuracy. We show that convolutional extraction of local feature responses is crucial to reach high performance. Furthermore we find that using this approach, simple and fast learning techniques such as k-means or randomly selected features are competitive with previously published results which also learn features from audio signals.",
"title": ""
}
] | [
{
"docid": "66b104459bdfc063cf7559c363c5802f",
"text": "We present a new local strategy to solve incremental learning tasks. Applied to Support Vector Machines based on local kernel, it allows to avoid re-learning of all the parameters by selecting a working subset where the incremental learning is performed. Automatic selection procedure is based on the estimation of generalization error by using theoretical bounds that involve the margin notion. Experimental simulation on three typical datasets of machine learning give promising results.",
"title": ""
},
{
"docid": "0188eb4ef8a87b6cee8657018360fa69",
"text": "This paper presents a pattern division multiple access (PDMA) concept for cellular future radio access (FRA) towards the 2020s information society. Different from the current LTE radio access scheme (until Release 11), PDMA is a novel non-orthogonal multiple access technology based on the total optimization of multiple user communication system. It considers joint design from both transmitter and receiver. At the receiver, multiple users are detected by successive interference cancellation (SIC) detection method. Numerical results show that the PDMA system based on SIC improve the average sum rate of users over the orthogonal system with affordable complexity.",
"title": ""
},
{
"docid": "cab673895969ded614a4063d19777f4d",
"text": "Functional magnetic resonance imaging was used to assess the cortical areas active during the observation of mouth actions performed by humans and by individuals belonging to other species (monkey and dog). Two types of actions were presented: biting and oral communicative actions (speech reading, lip-smacking, barking). As a control, static images of the same actions were shown. Observation of biting, regardless of the species of the individual performing the action, determined two activation foci (one rostral and one caudal) in the inferior parietal lobule and an activation of the pars opercularis of the inferior frontal gyrus and the adjacent ventral premotor cortex. The left rostral parietal focus (possibly BA 40) and the left premotor focus were very similar in all three conditions, while the right side foci were stronger during the observation of actions made by conspecifics. The observation of speech reading activated the left pars opercularis of the inferior frontal gyrus, the observation of lip-smacking activated a small focus in the pars opercularis bilaterally, and the observation of barking did not produce any activation in the frontal lobe. Observation of all types of mouth actions induced activation of extrastriate occipital areas. These results suggest that actions made by other individuals may be recognized through different mechanisms. Actions belonging to the motor repertoire of the observer (e.g., biting and speech reading) are mapped on the observer's motor system. Actions that do not belong to this repertoire (e.g., barking) are essentially recognized based on their visual properties. We propose that when the motor representation of the observed action is activated, the observer gains knowledge of the observed action in a personal perspective, while this perspective is lacking when there is no motor activation.",
"title": ""
},
{
"docid": "f6ae71fee81a8560f37cb0dccfd1e3cd",
"text": "Linguistic research to date has determined many of the principles that govern the structure of the spatial schemas represented by closed-class forms across the world’s languages. contributing to this cumulative understanding have, for example, been Gruber 1965, Fillmore 1968, Leech 1969, Clark 1973, Bennett 1975, Herskovits 1982, Jackendoff 1983, Zubin and Svorou 1984, as well as myself, Talmy 1983, 2000a, 2000b). It is now feasible to integrate these principles and to determine the comprehensive system they belong to for spatial structuring in spoken language. The finding here is that this system has three main parts: the componential, the compositional, and the augmentive.",
"title": ""
},
{
"docid": "6e82e635682cf87a84463f01c01a1d33",
"text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.",
"title": ""
},
{
"docid": "67dedca1dbdf5845b32c74e17fc42eb6",
"text": "How much trust a user places in a recommender is crucial to the uptake of the recommendations. Although prior work established various factors that build and sustain user trust, their comparative impact has not been studied in depth. This paper presents the results of a crowdsourced study examining the impact of various recommendation interfaces and content selection strategies on user trust. It evaluates the subjective ranking of nine key factors of trust grouped into three dimensions and examines the differences observed with respect to users' personality traits.",
"title": ""
},
{
"docid": "5cd3abebf4d990bb9196b7019b29c568",
"text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.",
"title": ""
},
{
"docid": "bd3f7e8e4416f67cb6e26ce0575af624",
"text": "Soft materials are being adopted in robotics in order to facilitate biomedical applications and in order to achieve simpler and more capable robots. One route to simplification is to design the robot's body using `smart materials' that carry the burden of control and actuation. Metamaterials enable just such rational design of the material properties. Here we present a soft robot that exploits mechanical metamaterials for the intrinsic synchronization of two passive clutches which contact its travel surface. Doing so allows it to move through an enclosed passage with an inchworm motion propelled by a single actuator. Our soft robot consists of two 3D-printed metamaterials that implement auxetic and normal elastic properties. The design, fabrication and characterization of the metamaterials are described. In addition, a working soft robot is presented. Since the synchronization mechanism is a feature of the robot's material body, we believe that the proposed design will enable compliant and robust implementations that scale well with miniaturization.",
"title": ""
},
{
"docid": "a2d699f3c600743c732b26071639038a",
"text": "A novel rectifying circuit topology is proposed for converting electromagnetic pulse waves (PWs), that are collected by a wideband antenna, into dc voltage. The typical incident signal considered in this paper consists of 10-ns pulses modulated around 2.4 GHz with a repetition period of 100 ns. The proposed rectifying circuit topology comprises a double-current architecture with inductances that collect the energy during the pulse delivery as well as an output capacitance that maintains the dc output voltage between the pulses. Experimental results show that the efficiency of the rectifier reaches 64% for a mean available incident power of 4 dBm. Similar performances are achieved when a wideband antenna is combined with the rectifier in order to realize a rectenna. By increasing the repetition period of the incident PWs to 400 ns, the rectifier still operates with an efficiency of 52% for a mean available incident pulse power of −8 dBm. Finally, the proposed PW rectenna is tested for a wireless energy transmission application in a low- $Q$ cavity. The time reversal technique is applied to focus PWs around the desired rectenna. Results show that the rectenna is still efficient when noisy PW is handled.",
"title": ""
},
{
"docid": "829f94e5e649d9b3501953e6b418bc11",
"text": "Most modern hypervisors offer powerful resource control primitives such as reservations, limits, and shares for individual virtual machines (VMs). These primitives provide a means to dynamic vertical scaling of VMs in order for the virtual applications to meet their respective service level objectives (SLOs). VMware DRS offers an additional resource abstraction of a resource pool (RP) as a logical container representing an aggregate resource allocation for a collection of VMs. In spite of the abundant research on translating application performance goals to resource requirements, the implementation of VM vertical scaling techniques in commercial products remains limited. In addition, no prior research has studied automatic adjustment of resource control settings at the resource pool level. In this paper, we present AppRM, a tool that automatically sets resource controls for both virtual machines and resource pools to meet application SLOs. AppRM contains a hierarchy of virtual application managers and resource pool managers. At the application level, AppRM translates performance objectives into the appropriate resource control settings for the individual VMs running that application. At the resource pool level, AppRM ensures that all important applications within the resource pool can meet their performance targets by adjusting controls at the resource pool level. Experimental results under a variety of dynamically changing workloads composed by multi-tiered applications demonstrate the effectiveness of AppRM. In all cases, AppRM is able to deliver application performance satisfaction without manual intervention.",
"title": ""
},
{
"docid": "4e19a7342ff32f82bc743f40b3395ee3",
"text": "The face image is the most accessible biometric modality which is used for highly accurate face recognition systems, while it is vulnerable to many different types of presentation attacks. Face anti-spoofing is a very critical step before feeding the face image to biometric systems. In this paper, we propose a novel two-stream CNN-based approach for face anti-spoofing, by extracting the local features and holistic depth maps from the face images. The local features facilitate CNN to discriminate the spoof patches independent of the spatial face areas. On the other hand, holistic depth map examine whether the input image has a face-like depth. Extensive experiments are conducted on the challenging databases (CASIA-FASD, MSU-USSA, and Replay Attack), with comparison to the state of the art.",
"title": ""
},
{
"docid": "3b549ddb51daba4fa5a0db8fa281ff7e",
"text": "We propose a method for learning from streaming visual data using a compact, constant size representation of all the data that was seen until a given moment. Specifically, we construct a “coreset” representation of streaming data using a parallelized algorithm, which is an approximation of a set with relation to the squared distances between this set and all other points in its ambient space. We learn an adaptive object appearance model from the coreset tree in constant time and logarithmic space and use it for object tracking by detection. Our method obtains excellent results for object tracking on three standard datasets over more than 100 videos. The ability to summarize data efficiently makes our method ideally suited for tracking in long videos in presence of space and time constraints. We demonstrate this ability by outperforming a variety of algorithms on the TLD dataset with 2685 frames on average. This coreset based learning approach can be applied for both real-time learning of small, varied data and fast learning of big data.",
"title": ""
},
{
"docid": "6974bf94292b51fc4efd699c28c90003",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "e5f90c30d546fe22a25305afefeaff8c",
"text": "H2O2 has been found to be required for the activity of the main microbial enzymes responsible for lignin oxidative cleavage, peroxidases. Along with other small radicals, it is implicated in the early attack of plant biomass by fungi. Among the few extracellular H2O2-generating enzymes known are the glyoxal oxidases (GLOX). GLOX is a copper-containing enzyme, sharing high similarity at the level of active site structure and chemistry with galactose oxidase. Genes encoding GLOX enzymes are widely distributed among wood-degrading fungi especially white-rot degraders, plant pathogenic and symbiotic fungi. GLOX has also been identified in plants. Although widely distributed, only few examples of characterized GLOX exist. The first characterized fungal GLOX was isolated from Phanerochaete chrysosporium. The GLOX from Utilago maydis has a role in filamentous growth and pathogenicity. More recently, two other glyoxal oxidases from the fungus Pycnoporus cinnabarinus were also characterized. In plants, GLOX from Vitis pseudoreticulata was found to be implicated in grapevine defence mechanisms. Fungal GLOX were found to be activated by peroxidases in vitro suggesting a synergistic and regulatory relationship between these enzymes. The substrates oxidized by GLOX are mainly aldehydes generated during lignin and carbohydrates degradation. The reactions catalysed by this enzyme such as the oxidation of toxic molecules and the production of valuable compounds (organic acids) makes GLOX a promising target for biotechnological applications. This aspect on GLOX remains new and needs to be investigated.",
"title": ""
},
{
"docid": "e2ee26af1fb425f8591b5b8689080fff",
"text": "In this paper, we focus on a recent Web trend called microblogging, and in particular a site called Twitter. The content of such a site is an extraordinarily large number of small textual messages, posted by millions of users, at random or in response to perceived events or situations. We have developed an algorithm that takes a trending phrase or any phrase specified by a user, collects a large number of posts containing the phrase, and provides an automatically created summary of the posts related to the term. We present examples of summaries we produce along with initial evaluation.",
"title": ""
},
{
"docid": "6a1ade9670c8ee161209d54901318692",
"text": "The motion of a plane can be described by a homography. We study how to parameterize homographies to maximize plane estimation performance. We compare the usual 3 × 3 matrix parameterization with a parameterization that combines 4 fixed points in one of the images with 4 variable points in the other image. We empirically show that this 4pt parameterization is far superior. We also compare both parameterizations with a variety of direct parameterizations. In the case of unknown relative orientation, we compare with a direct parameterization of the plane equation, and the rotation and translation of the camera(s). We show that the direct parameteri-zation is both less accurate and far less robust than the 4-point parameterization. We explain the poor performance using a measure of independence of the Jacobian images. In the fully calibrated setting, the direct parameterization just consists of 3 parameters of the plane equation. We show that this parameterization is far more robust than the 4-point parameterization, but only approximately as accurate. In the case of a moving stereo rig we find that the direct parameterization of plane equation, camera rotation and translation performs very well, both in terms of accuracy and robustness. This is in contrast to the corresponding direct parameterization in the case of unknown relative orientation. Finally, we illustrate the use of plane estimation in 2 automotive applications.",
"title": ""
},
{
"docid": "8283789e148f6e84f7901dc2a6ad0550",
"text": "A physical map has been constructed of the human genome containing 15,086 sequence-tagged sites (STSs), with an average spacing of 199 kilobases. The project involved assembly of a radiation hybrid map of the human genome containing 6193 loci and incorporated a genetic linkage map of the human genome containing 5264 loci. This information was combined with the results of STS-content screening of 10,850 loci against a yeast artificial chromosome library to produce an integrated map, anchored by the radiation hybrid and genetic maps. The map provides radiation hybrid coverage of 99 percent and physical coverage of 94 percent of the human genome. The map also represents an early step in an international project to generate a transcript map of the human genome, with more than 3235 expressed sequences localized. The STSs in the map provide a scaffold for initiating large-scale sequencing of the human genome.",
"title": ""
},
{
"docid": "2b952c455c9f8daa7f6c0c024620aef8",
"text": "Broadband use is booming around the globe as the infrastructure is built to provide high speed Internet and Internet Protocol television (IPTV) services. Driven by fierce competition and the search for increasing average revenue per user (ARPU), operators are evolving so they can deliver services within the home that involve a wide range of technologies, terminals, and appliances, as well as software that is increasingly rich and complex. “It should all work” is the key theme on the end user's mind, yet call centers are confronted with a multitude of consumer problems. The demarcation point between provider network and home network is blurring, in fact, if not yet in the consumer's mind. In this context, operators need to significantly rethink service lifecycle management. This paper explains how home and access support systems cover the most critical part of the network in service delivery. They build upon the inherent operation support features of access multiplexers, network termination devices, and home devices to allow the planning, fulfillment, operation, and assurance of new services.",
"title": ""
},
{
"docid": "8baa6af3ee08029f0a555e4f4db4e218",
"text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any datasetspecific engineering.",
"title": ""
},
{
"docid": "b51ab8520c29aa6b2ceaa79e9dda21b5",
"text": "This paper presents a new nanolubricant for the intermediate gearbox of the Apache aircraft. Historically, the intermediate gearbox has been prone for grease leaking and this natural-occurring fault has negatively impacted the airworthiness of the aircraft. In this study, the incorporation of graphite nanoparticles in mobile aviation gear oil is presented as a nanofluid with excellent thermo-physical properties. Condition-based maintenance practices are demonstrated where four nanoparticle additive oil samples with different concentrations are tested in a full-scale tail rotor drive-train test stand, in addition to, a baseline sample for comparison purposes. Different condition monitoring results suggest the capacity of the nanofluids to have significant gearbox performance benefits when compared to the base oil.",
"title": ""
}
] | scidocsrr |
a05a6184d933b9ebb2532954976fe785 | Word2Vec and Doc2Vec in Unsupervised Sentiment Analysis of Clinical Discharge Summaries | [
{
"docid": "cfbf63d92dfafe4ac0243acdff6cf562",
"text": "In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named W ORDNETAFFECT) was developed starting from W ORDNET, through a selection and tagging of a subset of synsets representing the affective",
"title": ""
},
{
"docid": "6b693af5ed67feab686a9a92e4329c94",
"text": "Physicians and nurses express their judgments and observations towards a patient’s health status in clinical narratives. Thus, their judgments are explicitly or implicitly included in patient records. To get impressions on the current health situation of a patient or on changes in the status, analysis and retrieval of this subjective content is crucial. In this paper, we approach this question as sentiment analysis problem and analyze the feasibility of assessing these judgments in clinical text by means of general sentiment analysis methods. Specifically, the word usage in clinical narratives and in a general text corpus is compared. The linguistic characteristics of judgments in clinical narratives are collected. Besides, the requirements for sentiment analysis and retrieval from clinical narratives are derived.",
"title": ""
}
] | [
{
"docid": "5394ca3d404c23a03bb123070855bf3c",
"text": "UNLABELLED\nA previously characterized rice hull smoke extract (RHSE) was tested for bactericidal activity against Salmonella Typhimurium using the disc-diffusion method. The minimum inhibitory concentration (MIC) value of RHSE was 0.822% (v/v). The in vivo antibacterial activity of RHSE (1.0%, v/v) was also examined in a Salmonella-infected Balb/c mouse model. Mice infected with a sublethal dose of the pathogens were administered intraperitoneally a 1.0% solution of RHSE at four 12-h intervals during the 48-h experimental period. The results showed that RHSE inhibited bacterial growth by 59.4%, 51.4%, 39.6%, and 28.3% compared to 78.7%, 64.6%, 59.2%, and 43.2% inhibition with the medicinal antibiotic vancomycin (20 mg/mL). By contrast, 4 consecutive administrations at 12-h intervals elicited the most effective antibacterial effect of 75.0% and 85.5% growth reduction of the bacteria by RHSE and vancomycin, respectively. The combination of RHSE and vancomycin acted synergistically against the pathogen. The inclusion of RHSE (1.0% v/w) as part of a standard mouse diet fed for 2 wk decreased mortality of 10 mice infected with lethal doses of the Salmonella. Photomicrographs of histological changes in liver tissues show that RHSE also protected the liver against Salmonella-induced pathological necrosis lesions. These beneficial results suggest that the RHSE has the potential to complement wood-derived smokes as antimicrobial flavor formulations for application to human foods and animal feeds.\n\n\nPRACTICAL APPLICATION\nThe new antimicrobial and anti-inflammatory rice hull derived liquid smoke has the potential to complement widely used wood-derived liquid smokes as an antimicrobial flavor and health-promoting formulation for application to foods.",
"title": ""
},
{
"docid": "923eee773a2953468bfd5876e0393d4d",
"text": "Latent variable time-series models are among the most heavily used tools from machine learning and applied statistics. These models have the advantage of learning latent structure both from noisy observations and from the temporal ordering in the data, where it is assumed that meaningful correlation structure exists across time. A few highly-structured models, such as the linear dynamical system with linear-Gaussian observations, have closed-form inference procedures (e.g. the Kalman Filter), but this case is an exception to the general rule that exact posterior inference in more complex generative models is intractable. Consequently, much work in time-series modeling focuses on approximate inference procedures for one particular class of models. Here, we extend recent developments in stochastic variational inference to develop a ‘black-box’ approximate inference technique for latent variable models with latent dynamical structure. We propose a structured Gaussian variational approximate posterior that carries the same intuition as the standard Kalman filter-smoother but, importantly, permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models. We show that our approach recovers accurate estimates in the case of basic models with closed-form posteriors, and more interestingly performs well in comparison to variational approaches that were designed in a bespoke fashion for specific non-conjugate models.",
"title": ""
},
{
"docid": "6c15e15bddca3cf7a197eec0cf560448",
"text": "Enterprises and service providers are increasingly looking to global service delivery as a means for containing costs while improving the quality of service delivery. However, it is often difficult to effectively manage the conflicting needs associated with dynamic customer workload, strict service level constraints, and efficient service personnel organization. In this paper we propose a dynamic approach for workload and personnel management, where organization of personnel is dynamically adjusted based upon differences between observed and target service level metrics. Our approach consists of constructing a dynamic service delivery organization and developing a feedback control mechanism for dynamic workload management. We demonstrate the effectiveness of the proposed approach in an IT incident management example designed based on a large service delivery environment handling more than ten thousand service requests over a period of six months.",
"title": ""
},
{
"docid": "b899a5effd239f1548128786d5ae3a8f",
"text": "As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator. Edward Balaban et.al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.",
"title": ""
},
{
"docid": "0591acdb82c352362de74d6daef10539",
"text": "In this paper we report on our ongoing studies around the application of Augmented Reality methods to support the order picking process of logistics applications. Order picking is the gathering of goods out of a prepared range of items following some customer orders. We named the visual support of this order picking process using Head-mounted Displays “Pick-by-Vision”. This work presents the case study of bringing our previously developed Pickby-Vision system from the lab to an experimental factory hall to evaluate it under more realistic conditions. This includes the execution of two user studies. In the first one we compared our Pickby-Vision system with and without tracking to picking using a paper list to check picking performance and quality in general. In a second test we had subjects using the Pick-by-Vision system continuously for two hours to gain in-depth insight into the longer use of our system, checking user strain besides the general performance. Furthermore, we report on the general obstacles of trying to use HMD-based AR in an industrial setup and discuss our observations of user behaviour.",
"title": ""
},
{
"docid": "d194d474676e5ee3113c588de30496c7",
"text": "While studies of social movements have mostly examined prevalent public discourses, undercurrents' the backstage practices consisting of meaning-making processes, narratives, and situated work-have received less attention. Through a qualitative interview study with sixteen participants, we examine the role of social media in supporting the undercurrents of the Umbrella Movement in Hong Kong. Interviews focused on an intense period of the movement exemplified by sit-in activities inspired by Occupy Wall Street in the USA. Whereas the use of Facebook for public discourse was similar to what has been reported in other studies, we found that an ecology of social media tools such as Facebook, WhatsApp, Telegram, and Google Docs mediated undercurrents that served to ground the public discourse of the movement. We discuss how the undercurrents sustained and developed public discourses in concrete ways.",
"title": ""
},
{
"docid": "9eedeec21ab380c0466ed7edfe7c745d",
"text": "In this paper, we study the effect of using-grams (sequences of words of length n) for text categorization. We use an efficient algorithm for gener ating suchn-gram features in two benchmark domains, the 20 newsgroups data set and 21,578 REU TERS newswire articles. Our results with the rule learning algorithm R IPPER indicate that, after the removal of stop words, word sequences of length 2 or 3 are most useful. Using l o er sequences reduces classification performance.",
"title": ""
},
{
"docid": "3299c32ee123e8c5fb28582e5f3a8455",
"text": "Software defects, commonly known as bugs, present a serious challenge for system reliability and dependability. Once a program failure is observed, the debugging activities to locate the defects are typically nontrivial and time consuming. In this paper, we propose a novel automated approach to pin-point the root-causes of software failures.\n Our proposed approach consists of three steps. The first step is bug prediction, which leverages the existing work on anomaly-based bug detection as exceptional behavior during program execution has been shown to frequently point to the root cause of a software failure. The second step is bug isolation, which eliminates false-positive bug predictions by checking whether the dynamic forward slices of bug predictions lead to the observed program failure. The last step is bug validation, in which the isolated anomalies are validated by dynamically nullifying their effects and observing if the program still fails. The whole bug prediction, isolation and validation process is fully automated and can be implemented with efficient architectural support. Our experiments with 6 programs and 7 bugs, including a real bug in the gcc 2.95.2 compiler, show that our approach is highly effective at isolating only the relevant anomalies. Compared to state-of-art debugging techniques, our proposed approach pinpoints the defect locations more accurately and presents the user with a much smaller code set to analyze.",
"title": ""
},
{
"docid": "7d308c302065253ee1adbffad04ff3f1",
"text": "Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of certain file, which potentially puts the quality of the so-called `auditing-as-a-service' at risk; Second, although some of the recent work based on BLS signature can already support fully dynamic data updates over fixed-size data blocks, they only support updates with fixed-sized blocks as basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis for possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.",
"title": ""
},
{
"docid": "cebc36cd572740069ab22e8181c405c4",
"text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"title": ""
},
{
"docid": "bd681720305b4dbfca49c3c90ee671be",
"text": "This document describes an extension of the One-Time Password (OTP) algorithm, namely the HMAC-based One-Time Password (HOTP) algorithm, as defined in RFC 4226, to support the time-based moving factor. The HOTP algorithm specifies an event-based OTP algorithm, where the moving factor is an event counter. The present work bases the moving factor on a time value. A time-based variant of the OTP algorithm provides short-lived OTP values, which are desirable for enhanced security. The proposed algorithm can be used across a wide range of network applications, from remote Virtual Private Network (VPN) access and Wi-Fi network logon to transaction-oriented Web applications. The authors believe that a common and shared algorithm will facilitate adoption of two-factor authentication on the Internet by enabling interoperability across commercial and open-source implementations. (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.",
"title": ""
},
{
"docid": "6cabc50fda1107a61c2704c4917b9501",
"text": "A vehicle tracking system is very useful for tracking the movement of a vehicle from any location at any time. In this work, real time Google map and Arduino based vehicle tracking system is implemented with Global Positioning System (GPS) and Global system for mobile communication (GSM) technology. GPS module provides geographic coordinates at regular time intervals. Then the GSM module transmits the location of vehicle to cell phone of owner/user in terms of latitude and longitude. At the same time, location is displayed on LCD. Finally, Google map displays the location and name of the place on cell phone. Thus, owner/user will be able to continuously monitor a moving vehicle using the cell phone. In order to show the feasibility and effectiveness of the system, this work presents experimental result of the vehicle tracking system. The proposed system is user friendly and ensures safety and surveillance at low maintenance cost.",
"title": ""
},
{
"docid": "1324ee90acbdfe27a14a0d86d785341a",
"text": "Though autonomous vehicles are currently operating in several places, many important questions within the field of autonomous vehicle research remain to be addressed satisfactorily. In this paper, we examine the role of communication between pedestrians and autonomous vehicles at unsignalized intersections. The nature of interaction between pedestrians and autonomous vehicles remains mostly in the realm of speculation currently. Of course, pedestrian’s reactions towards autonomous vehicles will gradually change over time owing to habituation, but it is clear that this topic requires urgent and ongoing study, not least of all because engineers require some working model for pedestrian-autonomous-vehicle communication. Our paper proposes a decision-theoretic model that expresses the interaction between a pedestrian and a vehicle. The model considers the interaction between a pedestrian and a vehicle as expressed an MDP, based on prior work conducted by psychologists examining similar experimental conditions. We describe this model and our simulation study of behavior it exhibits. The preliminary results on evaluating the behavior of the autonomous vehicle are promising and we believe it can help reduce the data needed to develop fuller models.",
"title": ""
},
{
"docid": "b6ee2327d8e7de5ede72540a378e69a0",
"text": "Heads of Government from Asia and the Pacific have committed to a malaria-free region by 2030. In 2015, the total number of confirmed cases reported to the World Health Organization by 22 Asia Pacific countries was 2,461,025. However, this was likely a gross underestimate due in part to incidence data not being available from the wide variety of known sources. There is a recognized need for an accurate picture of malaria over time and space to support the goal of elimination. A survey was conducted to gain a deeper understanding of the collection of malaria incidence data for surveillance by National Malaria Control Programmes in 22 countries identified by the Asia Pacific Leaders Malaria Alliance. In 2015–2016, a short questionnaire on malaria surveillance was distributed to 22 country National Malaria Control Programmes (NMCP) in the Asia Pacific. It collected country-specific information about the extent of inclusion of the range of possible sources of malaria incidence data and the role of the private sector in malaria treatment. The findings were used to produce recommendations for the regional heads of government on improving malaria surveillance to inform regional efforts towards malaria elimination. A survey response was received from all 22 target countries. Most of the malaria incidence data collected by NMCPs originated from government health facilities, while many did not collect comprehensive data from mobile and migrant populations, the private sector or the military. All data from village health workers were included by 10/20 countries and some by 5/20. Other sources of data included by some countries were plantations, police and other security forces, sentinel surveillance sites, research or academic institutions, private laboratories and other government ministries. Malaria was treated in private health facilities in 19/21 countries, while anti-malarials were available in private pharmacies in 16/21 and private shops in 6/21. Most countries use primarily paper-based reporting. Most collected malaria incidence data in the Asia Pacific is from government health facilities while data from a wide variety of other known sources are often not included in national surveillance databases. In particular, there needs to be a concerted regional effort to support inclusion of data on mobile and migrant populations and the private sector. There should also be an emphasis on electronic reporting and data harmonization across organizations. This will provide a more accurate and up to date picture of the true burden and distribution of malaria and will be of great assistance in helping realize the goal of malaria elimination in the Asia Pacific by 2030.",
"title": ""
},
{
"docid": "3a95b876619ce4b666278810b80cae77",
"text": "On 14 November 2016, northeastern South Island of New Zealand was struck by a major moment magnitude (Mw) 7.8 earthquake. Field observations, in conjunction with interferometric synthetic aperture radar, Global Positioning System, and seismology data, reveal this to be one of the most complex earthquakes ever recorded. The rupture propagated northward for more than 170 kilometers along both mapped and unmapped faults before continuing offshore at the island’s northeastern extent. Geodetic and field observations reveal surface ruptures along at least 12 major faults, including possible slip along the southern Hikurangi subduction interface; extensive uplift along much of the coastline; and widespread anelastic deformation, including the ~8-meter uplift of a fault-bounded block. This complex earthquake defies many conventional assumptions about the degree to which earthquake ruptures are controlled by fault segmentation and should motivate reevaluation of these issues in seismic hazard models.",
"title": ""
},
{
"docid": "0f6dbf39b8e06a768b3d2b769327168d",
"text": "In this paper, we focus on how to boost the multi-view clustering by exploring the complementary information among multi-view features. A multi-view clustering framework, called Diversity-induced Multi-view Subspace Clustering (DiMSC), is proposed for this task. In our method, we extend the existing subspace clustering into the multi-view domain, and utilize the Hilbert Schmidt Independence Criterion (HSIC) as a diversity term to explore the complementarity of multi-view representations, which could be solved efficiently by using the alternating minimizing optimization. Compared to other multi-view clustering methods, the enhanced complementarity reduces the redundancy between the multi-view representations, and improves the accuracy of the clustering results. Experiments on both image and video face clustering well demonstrate that the proposed method outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "5b4045a80ae584050a9057ba32c9296b",
"text": "Electro-rheological (ER) fluids are smart fluids which can transform into solid-like phase by applying an electric field. This process is reversible and can be strategically used to build fluidic components for innovative soft robots capable of soft locomotion. In this work, we show the potential applications of ER fluids to build valves that simplify design of fluidic based soft robots. We propose the design and development of a composite ER valve, aimed at controlling the flexibility of soft robots bodies by controlling the ER fluid flow. We present how an ad hoc number of such soft components can be embodied in a simple crawling soft robot (Wormbot); in a locomotion mechanism capable of forward motion through rotation; and, in a tendon driven continuum arm. All these embodiments show how simplification of the hydraulic circuits relies on the simple structure of ER valves. Finally, we address preliminary experiments to characterize the behavior of Wormbot in terms of actuation forces.",
"title": ""
},
{
"docid": "190bc8482b4bdc8662be25af68adb2c0",
"text": "The goal of all vitreous surgery is to perform the desired intraoperative intervention with minimum collateral damage in the most efficient way possible. An understanding of the principles of fluidics is of importance to all vitreoretinal surgeons to achieve these aims. Advances in technology mean that surgeons are being given increasing choice in the settings they are able to select for surgery. Manufacturers are marketing systems with aspiration driven by peristaltic, Venturi and hybrid pumps. Increasingly fast cut rates are offered with optimised, and in some cases surgeon-controlled, duty cycles. Function-specific cutters are becoming available and narrow-gauge instrumentation is evolving to meet surgeon demands with higher achievable flow rates. In parallel with the developments in outflow technology, infusion systems are advancing with lowering flow resistance and intraocular pressure control to improve fluidic stability during surgery. This review discusses the important aspects of fluidic technology so that surgeons can select the optimum machine parameters to carry out safe and effective surgery.",
"title": ""
},
{
"docid": "43a94e75e054f0245bdfc92c5217ce44",
"text": "Fine-grained image categories recognition is a challenging task aiming at distinguishing objects belonging to the same basic-level category, such as leaf or mushroom. It is a useful technique that can be applied for species recognition, face verification, and etc. Most of the existing methods have difficulties to automatically detect discriminative object components. In this paper, we propose a new fine-grained image categorization model that can be deemed as an improved version spatial pyramid matching (SPM). Instead of the conventional SPM that enumeratively conducts cell-to-cell matching between images, the proposed model combines multiple cells into cellets that are highly responsive to object fine-grained categories. In particular, we describe object components by cellets that connect spatially adjacent cells from the same pyramid level. Straightforwardly, image categorization can be casted as the matching between cellets extracted from pairwise images. Toward an effective matching process, a hierarchical sparse coding algorithm is derived that represents each cellet by a linear combination of the basis cellets. Further, a linear discriminant analysis (LDA)-like scheme is employed to select the cellets with high discrimination. On the basis of the feature vector built from the selected cellets, fine-grained image categorization is conducted by training a linear SVM. Experimental results on the Caltech-UCSD birds, the Leeds butterflies, and the COSMIC insects data sets demonstrate our model outperforms the state-of-the-art. Besides, the visualized cellets show discriminative object parts are localized accurately.",
"title": ""
},
{
"docid": "21aedc605ab5c9ef5416091adc407396",
"text": "This paper presents the basic results for using the parallel coordinate representation as a high dimensional data analysis tool. Several alternatives are reviewed. The basic algorithm for parallel coordinates is laid out and a discussion of its properties as a projective transformation are shown. The several of the duality results are discussed along with their interpretations as data analysis tools. A discussion of permutations of the parallel coordinate axes is given and some examples are given. Some extensions of the parallel coordinate idea are given. The paper closes with a discussion of implementation and some of our experiences are relayed. 1This research was supported by the Air Force Office of Scientific Research under grant number AFOSR-870179, by the Army Research Office under contract number DAAL03-87-K-0087 and by the National Science Foundation under grant number DMS-8701931 . Hyperdimensional Data Analysis Using Parallel Coordinates",
"title": ""
}
] | scidocsrr |
1b77ce3e83e9bfa07c05622e803ebfdf | Mechanical design and basic analysis of a modular robot with special climbing and manipulation functions | [
{
"docid": "7eba5af9ca0beaf8cbac4afb45e85339",
"text": "This paper is concerned with the derivation of the kinematics model of the University of Tehran-Pole Climbing Robot (UT-PCR). As the first step, an appropriate set of coordinates is selected and used to describe the state of the robot. Nonholonomic constraints imposed by the wheels are then expressed as a set of differential equations. By describing these equations in terms of the state of the robot an underactuated driftless nonlinear control system with affine inputs that governs the motion of the robot is derived. A set of experimental results are also given to show the capability of the UT-PCR in climbing a stepped pole.",
"title": ""
}
] | [
{
"docid": "ddb2ba1118e28acf687208bff99ce53a",
"text": "We show that information about social relationships can be used to improve user-level sentiment analysis. The main motivation behind our approach is that users that are somehow \"connected\" may be more likely to hold similar opinions; therefore, relationship information can complement what we can extract about a user's viewpoints from their utterances. Employing Twitter as a source for our experimental data, and working within a semi-supervised framework, we propose models that are induced either from the Twitter follower/followee network or from the network in Twitter formed by users referring to each other using \"@\" mentions. Our transductive learning results reveal that incorporating social-network information can indeed lead to statistically significant sentiment classification improvements over the performance of an approach based on Support Vector Machines having access only to textual features.",
"title": ""
},
{
"docid": "3458fb52eba9aa39896c1d7e3b3dc738",
"text": "The rising popularity of Android and the GUI-driven nature of its apps have motivated the need for applicable automated GUI testing techniques. Although exhaustive testing of all possible combinations is the ideal upper bound in combinatorial testing, it is often infeasible, due to the combinatorial explosion of test cases. This paper presents TrimDroid, a framework for GUI testing of Android apps that uses a novel strategy to generate tests in a combinatorial, yet scalable, fashion. It is backed with automated program analysis and formally rigorous test generation engines. TrimDroid relies on program analysis to extract formal specifications. These specifications express the app's behavior (i.e., control flow between the various app screens) as well as the GUI elements and their dependencies. The dependencies among the GUI elements comprising the app are used to reduce the number of combinations with the help of a solver. Our experiments have corroborated TrimDroid's ability to achieve a comparable coverage as that possible under exhaustive GUI testing using significantly fewer test cases.",
"title": ""
},
{
"docid": "772df08be1a3c3ea0854603727727c63",
"text": "This paper presents a low profile ultrawideband tightly coupled phased array antenna with integrated feedlines. The aperture array consists of planar element pairs with fractal geometry. In each element these pairs are set orthogonal to each other for dual polarisation. The design is an array of closely capacitively coupled pairs of fractal octagonal rings. The adjustment of the capacitive load at the tip end of the elements and the strong mutual coupling between the elements, enables a wideband conformal performance. Adding a ground plane below the array partly compensates for the frequency variation of the array impedance, providing further enhancement in the array bandwidth. Additional improvement is achieved by placing another layer of conductive elements at a defined distance above the radiating elements. A Genetic Algorithm was scripted in MATLAB and combined with the HFSS simulator, providing an easy optimisation tool across the operational bandwidth for the array unit cell design parameters. The proposed antenna shows a wide-scanning ability with a low cross-polarisation level over a wide bandwidth.",
"title": ""
},
{
"docid": "e913a4d2206be999f0278d48caa4708a",
"text": "Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. We build an ontology for cybersecurity operational information based on actual cybersecurity operations mainly focused on non-cloud computing. In order to discuss necessary cybersecurity information in cloud computing, we apply the ontology to cloud computing. Through the discussion, we identify essential changes in cloud computing such as data-asset decoupling and clarify the cybersecurity information required by the changes such as data provenance and resource dependency information.",
"title": ""
},
{
"docid": "56205e79e706e05957cb5081d6a8348a",
"text": "Corpus-based set expansion (i.e., finding the “complete” set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search. To discover new entities in an expanded set, previous approaches either make one-time entity ranking based on distributional similarity, or resort to iterative pattern-based bootstrapping. The core challenge for these methods is how to deal with noisy context features derived from free-text corpora, which may lead to entity intrusion and semantic drifting. In this study, we propose a novel framework, SetExpan, which tackles this problem, with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding entity set based on denoised context features. Experiments on three datasets show that SetExpan is robust and outperforms previous state-of-the-art methods in terms of mean average precision.",
"title": ""
},
{
"docid": "0084faef0e08c4025ccb3f8fd50892f1",
"text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from the past times to the present time. It has always been the interested topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore from time to time researchers have developed many techniques to fulfill secure transfer of data and steganography is one of them. In this paper we have proposed a new technique of image steganography i.e. Hash-LSB with RSA algorithm for providing more security to data as well as our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into LSB of RGB pixel values of the cover image. This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text got revealed from the cover image, the intermediate person other than receiver can't access the message as it is in encrypted form.",
"title": ""
},
{
"docid": "0f80933b5302bd6d9595234ff8368ac4",
"text": "We show how a simple convolutional neural network (CNN) can be trained to accurately and robustly regress 6 degrees of freedom (6DoF) 3D head pose, directly from image intensities. We further explain how this FacePoseNet (FPN) can be used to align faces in 2D and 3D as an alternative to explicit facial landmark detection for these tasks. We claim that in many cases the standard means of measuring landmark detector accuracy can be misleading when comparing different face alignments. Instead, we compare our FPN with existing methods by evaluating how they affect face recognition accuracy on the IJB-A and IJB-B benchmarks: using the same recognition pipeline, but varying the face alignment method. Our results show that (a) better landmark detection accuracy measured on the 300W benchmark does not necessarily imply better face recognition accuracy. (b) Our FPN provides superior 2D and 3D face alignment on both benchmarks. Finally, (c), FPN aligns faces at a small fraction of the computational cost of comparably accurate landmark detectors. For many purposes, FPN is thus a far faster and far more accurate face alignment method than using facial landmark detectors.",
"title": ""
},
{
"docid": "b012b434060ccc2c4e8c67d42e43728a",
"text": "With rapid development, wireless sensor networks (WSNs) have been focused on improving the performance consist of energy efficiency, communication effectiveness, and system throughput. Many novel mechanisms have been implemented by adapting the social behaviors of natural creatures, such as bats, birds, ants, fish and honeybees. These systems are known as nature inspired systems or swarm intelligence in in order to provide optimization strategies, handle large-scale networks and avoid resource constraints. Spider monkey optimization (SMO) is a recent addition to the family of swarm intelligence algorithms by structuring the social foraging behavior of spider monkeys. In this paper, we aim to study the mechanism of SMO in the field of WSNs, formulating the mathematical model of the behavior patterns which cluster-based Spider Monkey Optimization (SMO-C) approach is adapted. In addition, our proposed methodology based on the Spider Monkey's behavioral structure aims to improve the traditional routing protocols in term of low-energy consumption and system quality of the network.",
"title": ""
},
{
"docid": "71ca5a461ff8eb6fc33c1a272c4acfac",
"text": "We introduce a tree manipulation language, Fast, that overcomes technical limitations of previous tree manipulation languages, such as XPath and XSLT which do not support precise program analysis, or TTT and Tiburon which only support trees over finite alphabets. At the heart of Fast is a combination of SMT solvers and tree transducers, enabling it to model programs whose input and output can range over any decidable theory. The language can express multiple applications. We write an HTML “sanitizer” in Fast and obtain results comparable to leading libraries but with smaller code. Next we show how augmented reality “tagging” applications can be checked for potential overlap in milliseconds using Fast type checking. We show how transducer composition enables deforestation for improved performance. Overall, we strike a balance between expressiveness and precise analysis that works for a large class of important tree-manipulating programs.",
"title": ""
},
{
"docid": "26ead0555a416c62a2153f29c5d95c25",
"text": "BACKGROUND\nAgricultural systems are amended ecosystems with a variety of properties. Modern agroecosystems have tended towards high through-flow systems, with energy supplied by fossil fuels directed out of the system (either deliberately for harvests or accidentally through side effects). In the coming decades, resource constraints over water, soil, biodiversity and land will affect agricultural systems. Sustainable agroecosystems are those tending to have a positive impact on natural, social and human capital, while unsustainable systems feed back to deplete these assets, leaving fewer for the future. Sustainable intensification (SI) is defined as a process or system where agricultural yields are increased without adverse environmental impact and without the conversion of additional non-agricultural land. The concept does not articulate or privilege any particular vision or method of agricultural production. Rather, it emphasizes ends rather than means, and does not pre-determine technologies, species mix or particular design components. The combination of the terms 'sustainable' and 'intensification' is an attempt to indicate that desirable outcomes around both more food and improved environmental goods and services could be achieved by a variety of means. Nonetheless, it remains controversial to some.\n\n\nSCOPE AND CONCLUSIONS\nThis review analyses recent evidence of the impacts of SI in both developing and industrialized countries, and demonstrates that both yield and natural capital dividends can occur. The review begins with analysis of the emergence of combined agricultural-environmental systems, the environmental and social outcomes of recent agricultural revolutions, and analyses the challenges for food production this century as populations grow and consumption patterns change. Emergent criticisms are highlighted, and the positive impacts of SI on food outputs and renewable capital assets detailed. It concludes with observations on policies and incentives necessary for the wider adoption of SI, and indicates how SI could both promote transitions towards greener economies as well as benefit from progress in other sectors.",
"title": ""
},
{
"docid": "be9cb16913cabce783a16998fb5023b7",
"text": "Unlike conventional hydro and tidal barrage installations, water current turbines in open flow can generate power from flowing water with almost zero environmental impact, over a much wider range of sites than those available for conventional tidal power generation. Recent developments in current turbine design are reviewed and some potential advantages of ducted or “diffuser-augmented” current turbines are explored. These include improved safety, protection from weed growth, increased power output and reduced turbine and gearbox size for a given power output. Ducted turbines are not subject to the so-called Betz limit, which defines an upper limit of 59.3% of the incident kinetic energy that can be converted to shaft power by a single actuator disk turbine in open flow. For ducted turbines the theoretical limit depends on (i) the pressure difference that can be created between duct inlet and outlet, and (ii) the volumetric flow through the duct. These factors in turn depend on the shape of the duct and the ratio of duct area to turbine area. Previous investigations by others have found a theoretical limit for a diffuser-augmented wind turbine of about 3.3 times the Betz limit, and a model diffuseraugmented wind turbine has extracted 4.25 times the power extracted by the same turbine without a diffuser. In the present study, similar principles applied to a water turbine have so far achieved an augmentation factor of 3 at an early stage of the investigation.",
"title": ""
},
{
"docid": "102a9eb7ba9f65a52c6983d74120430e",
"text": "A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain. This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). Social and Personality Psychology Compass 5/6 (2011): 359–371, 10.1111/j.1751-9004.2011.00355.x a 2011 The Authors Social and Personality Psychology Compass a 2011 Blackwell Publishing Ltd The current article examines the viability of the causal steps approach in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) are tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation. Traditionally, the X fi Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect. 
At first glance, the requirement of a significant X fi Y association prior to examining mediation seems reasonable. If there is no significant X fi Y relationship, how can there be any mediation of it? Furthermore, the requirement that X fi Y become nonsignificant when controlling for the mediator seems sensible in order to claim ‘full mediation’. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.",
"title": ""
},
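The abstract above argues for judging mediation by the magnitude and significance of the indirect effect rather than by the X → Y test. One generic way to do that (a reasonable reading of the recommendation, not the authors' prescribed procedure) is to estimate the indirect effect as the product of the a and b paths and bootstrap a confidence interval for it, as in this assumed-data sketch:

```python
# Illustrative sketch: estimate the indirect effect a*b (X -> M, M -> Y | X) and
# bootstrap a percentile confidence interval. Generic example, not a specific package.
import numpy as np

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                     # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])   # Y ~ 1 + X + M
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]     # partial slope of Y on M
    return a * b

def bootstrap_ci(x, m, y, n_boot=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                # resample cases with replacement
        draws.append(indirect_effect(x[idx], m[idx], y[idx]))
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2])
```

If the resulting interval excludes zero, the indirect effect is treated as significant regardless of whether the total X → Y path reaches significance.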
{
"docid": "5e8f2e9d799b865bb16bd3a68003db73",
"text": "A robust road markings detection algorithm is a fundamental component of intelligent vehicles' autonomous navigation in urban environment. This paper presents an algorithm for detecting road markings including zebra crossings, stop lines and lane markings to provide road information for intelligent vehicles. First, to eliminate the impact of the perspective effect, an Inverse Perspective Mapping (IPM) transformation is applied to the images grabbed by the camera; the region of interest (ROI) was extracted from IPM image by a low level processing. Then, different algorithms are adopted to extract zebra crossings, stop lines and lane markings. The experiments on a large number of street scenes in different conditions demonstrate the effectiveness of the proposed algorithm.",
"title": ""
},
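As an informal illustration of the IPM step mentioned in the record above (not the paper's actual pipeline or calibration), a bird's-eye warp can be obtained from four ground-plane correspondences with OpenCV; the trapezoid coordinates below are placeholders that would normally come from camera calibration, and the input is assumed to be a BGR frame.

```python
# Sketch of inverse perspective mapping (IPM) followed by a simple marking threshold.
# The source trapezoid is a hypothetical guess; real systems derive it from calibration.
import cv2
import numpy as np

def ipm_and_threshold(frame: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    # Trapezoid on the road plane in the camera image (assumed values).
    src = np.float32([[w * 0.42, h * 0.65], [w * 0.58, h * 0.65],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    birdseye = cv2.warpPerspective(frame, H, (w, h))       # remove perspective effect
    gray = cv2.cvtColor(birdseye, cv2.COLOR_BGR2GRAY)
    # Bright markings stand out in the bird's-eye view; Otsu picks the threshold.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```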
{
"docid": "58c238443e7fbe7043cfa4c67b28dbb2",
"text": "In the fall of 2013, we offered an open online Introduction to Recommender Systems through Coursera, while simultaneously offering a for-credit version of the course on-campus using the Coursera platform and a flipped classroom instruction model. As the goal of offering this course was to experiment with this type of instruction, we performed extensive evaluation including surveys of demographics, self-assessed skills, and learning intent; we also designed a knowledge-assessment tool specifically for the subject matter in this course, administering it before and after the course to measure learning, and again 5 months later to measure retention. We also tracked students through the course, including separating out students enrolled for credit from those enrolled only for the free, open course.\n Students had significant knowledge gains across all levels of prior knowledge and across all demographic categories. The main predictor of knowledge gain was effort expended in the course. Students also had significant knowledge retention after the course. Both of these results are limited to the sample of students who chose to complete our knowledge tests. Student completion of the course was hard to predict, with few factors contributing predictive power; the main predictor of completion was intent to complete. Students who chose a concepts-only track with hand exercises achieved the same level of knowledge of recommender systems concepts as those who chose a programming track and its added assignments, though the programming students gained additional programming knowledge. Based on the limited data we were able to gather, face-to-face students performed as well as the online-only students or better; they preferred this format to traditional lecture for reasons ranging from pure convenience to the desire to watch videos at a different pace (slower for English language learners; faster for some native English speakers). This article also includes our qualitative observations, lessons learned, and future directions.",
"title": ""
},
{
"docid": "f69b170e9ccd7f04cbc526373b0ad8ee",
"text": "meaning (overall M = 5.89) and significantly higher than with any of the other three abstract meanings (overall M = 2.05, all ps < .001). Procedure. Under a cover story of studying advertising slogans, participants saw one of the 22 target brands and thought about its abstract concept in memory. They were then presented, on a single screen, with four alternative slogans (in random order) for the target brand and were asked to rank the slogans, from 1 (“best”) to 4 (“worst”), in terms of how well the slogan fits the image of the target brand. Each slogan was intended to distinctively communicate the abstract meaning associated with one of the four high-levelmeaning associated with one of the four high-level brand value dimensions uncovered in the pilot study. After a series of filler tasks, participants indicated their attitude toward the brand on a seven-point scale (1 = “very unfavorable,” and 7 = “very favorable”). Ranking of the slogans. We conducted separate nonparametric Kruskal-Wallis tests on each country’s data to evaluate differences in the rank order for each of the four slogans among the four types of brand concepts. In all countries, the tests were significant (the United States: all 2(3, N = 539) ≥ 145.4, all ps < .001; China: all 2(3, N = 208) ≥ 52.8, all ps < .001; Canada: all 2(3, N = 380) ≥ 33.3, all ps < .001; Turkey: all 2(3, N = 380) ≥ 51.0, all ps < .001). We pooled the data from the four countries and conducted follow-up tests to evaluate pairwise differences in the rank order of each slogan among the four brand concepts, controlling for Type I error across tests using the Bonferroni approach. The results of these tests indicated that each slogan was ranked at the top in terms of favorability when it matched the brand concept (self-enhancement brand concept: Mself-enhancement slogan = 1.77; openness brand FIGURE 2 Structural Relations Among Value Dimensions from Multidimensional Scaling (Pilot: Study 1) b = benevolence, t = tradition, c = conformity, sec = security S e l f E n h a n c e m e n t IN D VID U A L C O N C ER N S C O LL EC TI VE C O N C ER N S",
"title": ""
},
{
"docid": "badb04b676d3dab31024e8033fc8aec4",
"text": "Review was undertaken from February 1969 to January 1998 at the State forensic science center (Forensic Science) in Adelaide, South Australia, of all cases of murder-suicide involving children <16 years of age. A total of 13 separate cases were identified involving 30 victims, all of whom were related to the perpetrators. There were 7 male and 6 female perpetrators (age range, 23-41 years; average, 31 years) consisting of 6 mothers, 6 father/husbands, and 1 uncle/son-in-law. The 30 victims consisted of 11 daughters, 11 sons, 1 niece, 1 mother-in-law, and 6 wives of the assailants. The 23 children were aged from 10 months to 15 years (average, 6.0 years). The 6 mothers murdered 9 children and no spouses, with 3 child survivors. The 6 fathers murdered 13 children and 6 wives, with 1 child survivor. This study has demonstrated a higher percentage of female perpetrators than other studies of murder-suicide. The methods of homicide and suicide used were generally less violent among the female perpetrators compared with male perpetrators. Fathers killed not only their children but also their wives, whereas mothers murdered only their children. These results suggest differences between murder-suicides that involve children and adult-only cases, and between cases in which the mother rather than the father is the perpetrator.",
"title": ""
},
{
"docid": "e222cbd0d62e4a323feb7c57bc3ff7a3",
"text": "Facebook and other social media have been hailed as delivering the promise of new, socially engaged educational experiences for students in undergraduate, self-directed, and other educational sectors. A theoretical and historical analysis of these media in the light of earlier media transformations, however, helps to situate and qualify this promise. Specifically, the analysis of dominant social media presented here questions whether social media platforms satisfy a crucial component of learning – fostering the capacity for debate and disagreement. By using the analytical frame of media theorist Raymond Williams, with its emphasis on the influence of advertising in the content and form of television, we weigh the conditions of dominant social networking sites as constraints for debate and therefore learning. Accordingly, we propose an update to Williams’ erudite work that is in keeping with our findings. Williams’ critique focuses on the structural characteristics of sequence, rhythm, and flow of television as a cultural form. Our critique proposes the terms information design, architecture, and above all algorithm, as structural characteristics that similarly apply to the related but contemporary cultural form of social networking services. Illustrating the ongoing salience of media theory and history for research in e-learning, the article updates Williams’ work while leveraging it in a critical discussion of the suitability of commercial social media for education.",
"title": ""
},
{
"docid": "3fae9d0778c9f9df1ae51ad3b5f62a05",
"text": "This paper argues for the utility of back-end driven onloading to the edge as a way to address bandwidth use and latency challenges for future device-cloud interactions. Supporting such edge functions (EFs) requires solutions that can provide (i) fast and scalable EF provisioning and (ii) strong guarantees for the integrity of the EF execution and confidentiality of the state stored at the edge. In response to these goals, we (i) present a detailed design space exploration of the current technologies that can be leveraged in the design of edge function platforms (EFPs), (ii) develop a solution to address security concerns of EFs that leverages emerging hardware support for OS agnostic trusted execution environments such as Intel SGX enclaves, and (iii) propose and evaluate AirBox, a platform for fast, scalable and secure onloading of edge functions.",
"title": ""
},
{
"docid": "25d25da610b4b3fe54b665d55afc3323",
"text": "We address the problem of vision-based navigation in busy inner-city locations, using a stereo rig mounted on a mobile platform. In this scenario semantic information becomes important: rather than modelling moving objects as arbitrary obstacles, they should be categorised and tracked in order to predict their future behaviour. To this end, we combine classical geometric world mapping with object category detection and tracking. Object-category specific detectors serve to find instances of the most important object classes (in our case pedestrians and cars). Based on these detections, multi-object tracking recovers the objects’ trajectories, thereby making it possible to predict their future locations, and to employ dynamic path planning. The approach is evaluated on challenging, realistic video sequences recorded at busy inner-city locations.",
"title": ""
}
] | scidocsrr |
a49cedfb08b746c108e496b0c9f8fa5e | An Ensemble Approach for Incremental Learning in Nonstationary Environments | [
{
"docid": "101af2d0539fa1470e8acfcf7c728891",
"text": "OnlineEnsembleLearning",
"title": ""
},
{
"docid": "fc5782aa3152ca914c6ca5cf1aef84eb",
"text": "We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.",
"title": ""
}
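A toy sketch of the ensemble idea summarized in the Learn++ abstract above: each incoming batch trains a new base classifier, and members vote with log(1/β) weights derived from their training error. It skips Learn++'s instance-distribution updates and is not the original algorithm or code; scikit-learn's `MLPClassifier` is used only as a convenient stand-in for the base network.

```python
# Simplified incremental ensemble with weighted majority voting (Learn++-inspired,
# not the published algorithm: the distribution-update step is omitted).
import numpy as np
from sklearn.neural_network import MLPClassifier

class IncrementalEnsemble:
    def __init__(self):
        self.members, self.weights, self.classes_ = [], [], None

    def partial_fit_batch(self, X, y):
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
        err = np.mean(clf.predict(X) != y)
        beta = max(err, 1e-6) / max(1.0 - err, 1e-6)
        self.members.append(clf)
        self.weights.append(np.log(1.0 / beta))   # low-error members vote louder
        self.classes_ = (np.unique(np.concatenate([self.classes_, y]))
                         if self.classes_ is not None else np.unique(y))

    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes_)))
        for clf, w in zip(self.members, self.weights):
            pred = clf.predict(X)
            for ci, c in enumerate(self.classes_):
                votes[:, ci] += w * (pred == c)
        return self.classes_[np.argmax(votes, axis=1)]
```

Because each batch only adds a new member, previously used data never needs to be revisited, mirroring the incremental-learning constraint described above.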
] | [
{
"docid": "8f2be7a7f6b5f5ba1412e8635a6aa755",
"text": "In this paper, we propose to infer music genre embeddings from audio datasets carrying semantic information about genres. We show that such embeddings can be used for disambiguating genre tags (identification of different labels for the same genre, tag translation from a tag system to another, inference of hierarchical taxonomies on these genre tags). These embeddings are built by training a deep convolutional neural network genre classifier with large audio datasets annotated with a flat tag system. We show empirically that they makes it possible to retrieve the original taxonomy of a tag system, spot duplicates tags and translate tags from a tag system to another.",
"title": ""
},
{
"docid": "16a30db315374b42d721a91bb5549763",
"text": "The display units integrated in todays head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view to the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering of the 3D scene, is set up according to the display's field of view. A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. Discrepancies between the geometric and physical FOV causes the imagery to be minified or magnified. This distortion has the potential to negatively or positively affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks.\n In this paper we analyze if a user is consciously aware of perspective distortions of the VE displayed in the HMD. We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted an experiment to identify perspective projections for HMDs which are identified as natural by subjects---even if these perspectives deviate from the perspectives that are inherently defined by the display's field of view. We found that subjects evaluate a field of view as natural when it is larger than the actual field of view of the HMD---in some cases up to 50%.",
"title": ""
},
{
"docid": "325d6c44ef7f4d4e642e882a56f439b7",
"text": "In announcing the news that “post-truth” is the Oxford Dictionaries’ 2016 word of the year, the Chicago Tribune declared that “Truth is dead. Facts are passé.”1 Politicians have shoveled this mantra our direction for centuries, but during this past presidential election, they really rubbed our collective faces in it. To be fair, the word “post” isn’t to be taken to mean “after,” as in its normal sense, but rather as “irrelevant.” Careful observers of the recent US political campaigns came to appreciate this difference. Candidates spewed streams of rhetorical effluent that didn’t even pretend to pass the most perfunctory fact-checking smell test. As the Tribune noted, far too many voters either didn’t notice or didn’t care. That said, recognizing an unwelcome phenomenon isn’t the same as legitimizing it, and now the Oxford Dictionaries group has gone too far toward the latter. They say “post-truth” captures the “ethos, mood or preoccupations of [2016] to have lasting potential as a word of cultural significance.”1 I emphatically disagree. I don’t know what post-truth did capture, but it didn’t capture that. We need a phrase for the 2016 mood that’s a better fit. I propose the term “gaudy facts,” for it emphasizes the garish and tawdry nature of the recent political dialog. Further, “gaudy facts” has the advantage of avoiding the word truth altogether, since there’s precious little of that in political discourse anyway. I think our new term best captures the ethos and mood of today’s political delusionists. There’s no ground truth data in sight, all claims are imaginary and unsupported without pretense of facts, and distortion is reality. This seems to fit our present experience well. The only tangible remnant of reality that isn’t subsumed under our new term is the speakers’ underlying narcissism, but at least we’re closer than we were with “post-truth.” We need to forever banish the association of the word “truth” with “politics”—these two terms just don’t play well with each other. Lies, Damn Lies, and Fake News",
"title": ""
},
{
"docid": "55658c75bcc3a12c1b3f276050f28355",
"text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm2. The lowest functional Vdd 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature. Digital trimming is demonstrated, and assisted one temperature point digital trimming, guided by initial samples with two temperature point trimming, enables TC <; 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.",
"title": ""
},
{
"docid": "4edb9dea1e949148598279c0111c4531",
"text": "This paper presents a design of highly effective triple band microstrip antenna for wireless communication applications. The triple band design is a metamaterial-based design for WLAN and WiMAX (2.4/3.5/5.6 GHz) applications. The triple band response is obtained by etching two circular and one rectangular split ring resonator (SRR) unit cells on the ground plane of a conventional patch operating at 3.56 GHz. The circular cells are introduced to resonate at 5.3 GHz for the upper WiMAX band, while the rectangular cell is designed to resonate at 2.45 GHz for the lower WLAN band. Furthermore, a novel complementary H-shaped unit cell oriented above the triple band antenna is proposed. The proposed H-shaped is being used as a lens to significantly increase the antenna gain. To investigate the left-handed behavior of the proposed H-shaped, extensive parametric study for the placement of each unit cell including the metamaterial lens, which is the main parameter affecting the antenna performance, is presented and discussed comprehensively. Good consistency between the measured and simulated results is achieved. The proposed antenna meets the requirements of WiMAX and WLAN standards with high peak realized gain.",
"title": ""
},
{
"docid": "c93c690ecb038a87c351d9674f0a881a",
"text": "Foot-operated computer interfaces have been studied since the inception of human--computer interaction. Thanks to the miniaturisation and decreasing cost of sensing technology, there is an increasing interest exploring this alternative input modality, but no comprehensive overview of its research landscape. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in these interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.",
"title": ""
},
{
"docid": "3ec603c63166167c88dc6d578a7c652f",
"text": "Peer-to-peer (P2P) lending or crowdlending, is a recent innovation allows a group of individual or institutional lenders to lend funds to individuals or businesses in return for interest payment on top of capital repayments. The rapid growth of P2P lending marketplaces has heightened the need to develop a support system to help lenders make sound lending decisions. But realizing such system is challenging in the absence of formal credit data used by the banking sector. In this paper, we attempt to explore the possible connections between user credit risk and how users behave in the lending sites. We present the first analysis of user detailed clickstream data from a large P2P lending provider. Our analysis reveals that the users’ sequences of repayment histories and financial activities in the lending site, have significant predictive value for their future loan repayments. In the light of this, we propose a deep architecture named DeepCredit, to automatically acquire the knowledge of credit risk from the sequences of activities that users conduct on the site. Experiments on our large-scale real-world dataset show that our model generates a high accuracy in predicting both loan delinquency and default, and significantly outperforms a number of baselines and competitive alternatives.",
"title": ""
},
{
"docid": "5dba3258382d9781287cdcb6b227153c",
"text": "Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.",
"title": ""
},
{
"docid": "bfc8a36a8b3f1d74bad5f2e25ad3aae5",
"text": "This paper presents a novel ac-dc power factor correction (PFC) power conversion architecture for a single-phase grid interface. The proposed architecture has significant advantages for achieving high efficiency, good power factor, and converter miniaturization, especially in low-to-medium power applications. The architecture enables twice-line-frequency energy to be buffered at high voltage with a large voltage swing, enabling reduction in the energy buffer capacitor size and the elimination of electrolytic capacitors. While this architecture can be beneficial with a variety of converter topologies, it is especially suited for the system miniaturization by enabling designs that operate at high frequency (HF, 3-30 MHz). Moreover, we introduce circuit implementations that provide efficient operation in this range. The proposed approach is demonstrated for an LED driver converter operating at a (variable) HF switching frequency (3-10 MHz) from 120 Vac, and supplying a 35 Vdc output at up to 30 W. The prototype converter achieves high efficiency (92%) and power factor (0.89), and maintains a good performance over a wide load range. Owing to the architecture and HF operation, the prototype achieves a high “box” power density of 50 W/in3 (“displacement” power density of 130 W/in3), with miniaturized inductors, ceramic energy buffer capacitors, and a small-volume EMI filter.",
"title": ""
},
{
"docid": "fe5377214840549fbbb6ad520592191d",
"text": "The ability to exert an appropriate amount of force on brain tissue during surgery is an important component of instrument handling. It allows surgeons to achieve the surgical objective effectively while maintaining a safe level of force in tool-tissue interaction. At the present time, this knowledge, and hence skill, is acquired through experience and is qualitatively conveyed from an expert surgeon to trainees. These forces can be assessed quantitatively by retrofitting surgical tools with sensors, thus providing a mechanism for improved performance and safety of surgery, and enhanced surgical training. This paper presents the development of a force-sensing bipolar forceps, with installation of a sensory system, that is able to measure and record interaction forces between the forceps tips and brain tissue in real time. This research is an extension of a previous research where a bipolar forceps was instrumented to measure dissection and coagulation forces applied in a single direction. Here, a planar forceps with two sets of strain gauges in two orthogonal directions was developed to enable measuring the forces with a higher accuracy. Implementation of two strain gauges allowed compensation of strain values due to deformations of the forceps in other directions (axial stiffening) and provided more accurate forces during microsurgery. An experienced neurosurgeon performed five neurosurgical tasks using the axial setup and repeated the same tasks using the planar device. The experiments were performed on cadaveric brains. Both setups were shown to be capable of measuring real-time interaction forces. Comparing the two setups, under the same experimental condition, indicated that the peak and mean forces quantified by planar forceps were at least 7% and 10% less than those of axial tool, respectively; therefore, utilizing readings of all strain gauges in planar forceps provides more accurate values of both peak and mean forces than axial forceps. Cross-correlation analysis between the two force signals obtained, one from each cadaveric practice, showed a high similarity between the two force signals.",
"title": ""
},
{
"docid": "a6d4b6a0cd71a8e64c9a2429b95cd7da",
"text": "Creativity research has traditionally focused on human creativity, and even more specifically, on the psychology of individual creative people. In contrast, computational creativity research involves the development and evaluation of creativity in a computational system. As we study the effect of scaling up from the creativity of a computational system and individual people to large numbers of diverse computational agents and people, we have a new perspective: creativity can ascribed to a computational agent, an individual person, collectives of people and agents and/or their interaction. By asking “Who is being creative?” this paper examines the source of creativity in computational and collective creativity. A framework based on ideation and interaction provides a way of characterizing existing research in computational and collective creativity and identifying directions for future research. Human and Computational Creativity Creativity is a topic of philosophical and scientific study considering the scenarios and human characteristics that facilitate creativity as well as the properties of computational systems that exhibit creative behavior. “The four Ps of creativity”, as introduced in Rhodes (1987) and more recently summarized by Runco (2011), decompose the complexity of creativity into separate but related influences: • Person: characteristics of the individual, • Product: an outcome focus on ideas, • Press: the environmental and contextual factors, • Process: cognitive process and thinking techniques. While the four Ps are presented in the context of the psychology of human creativity, they can be modified for computational creativity if process includes a computational process. The study of human creativity has a focus on the characteristics and cognitive behavior of creative people and the environments in which creativity is facilitated. The study of computational creativity, while inspired by concepts of human creativity, is often expressed in the formal language of search spaces and algorithms. Why do we ask who is being creative? Firstly, there is an increasing interest in understanding computational systems that can formalize or model creative processes and therefore exhibit creative behaviors or acts. Yet there are still skeptics that claim computers aren’t creative, the computer is just following instructions. Second and in contrast, there is increasing interest in computational systems that encourage and enhance human creativity that make no claims about whether the computer is being or could be creative. Finally, as we develop more capable socially intelligent computational systems and systems that enable collective intelligence among humans and computers, the boundary between human creativity and computer creativity blurs. As the boundary blurs, we need to develop ways of recognizing creativity that makes no assumptions about whether the creative entity is a person, a computer, a potentially large group of people, or the collective intelligence of human and computational entities. This paper presents a framework that characterizes the source of creativity from two perspectives, ideation and interaction, as a guide to current and future research in computational and collective creativity. Creativity: Process and Product Understanding the nature of creativity as process and product is critical in computational creativity if we want to avoid any bias that only humans are creative and computers are not. 
While process and product in creativity are tightly coupled in practice, a distinction between the two provides two ways of recognizing computational creativity by describing the characteristics of a creative process and separately, the characteristics of a creative product. Studying and describing the processes that generate creative products focus on the cognitive behavior of a creative person or the properties of a computational system, and describing ways of recognizing a creative product focus on the characteristics of the result of a creative process. When describing creative processes there is an assumption that there is a space of possibilities. Boden (2003) refers to this as conceptual spaces and describes these spaces as structured styles of thought. In computational systems such a space is called a state space. How such spaces are changed, or the relationship between the set of known products, the space of possibilities, and the potentially creative product, is the basis for describing processes that can generate potentially creative artifacts. There are many accounts of the processes for generating creative products. Two sources are described here: Boden (2003) from the philosophical and artificial intelligence perspective and Gero (2000) from the design science perspective. Boden (2003) describes three ways in which creative products can be generated: combination, exploration, International Conference on Computational Creativity 2012 67 and transformation: each one describes the way in which the conceptual space of known products provides a basis for generating a creative product and how the conceptual space changes as a result of the creative artifact. Combination brings together two or more concepts in ways that hasn’t occurred in existing products. Exploration finds concepts in parts of the space that have not been considered in existing products. Transformation modifies concepts in the space to generate products that change the boundaries of the space. Gero (2000) describes computational processes for creative design as combination, transformation, analogy, emergence, and first principles. Combination and transformation are similar to Boden’s processes. Analogy transfers concepts from a source product that may be in a different conceptual space to a target product to generate a novel product in the target’s space. Emergence is a process that finds new underlying structures in a concept that give rise to a new product, effectively a re-representation process. First principles as a process generates new products without relying on concepts as defined in existing products. While these processes provide insight into the nature of creativity and provide a basis for computational creativity, they have little to say about how we recognize a creative product. As we move towards computational systems that enhance or contribute to human creativity, the articulation of process models for generating creative artifacts does not provide an evaluation of the product. Computational systems that generate creative products need evaluation criteria that are independent of the process by which the product was generated. There are also numerous approaches to defining characteristics of creative products as the basis for evaluating or assessing creativity. Boden (2003) claims that novelty and value are the essential criteria and that other aspects, such as surprise, are kinds of novelty or value. 
Wiggins (2006) often uses value to indicate all valuable aspects of a creative products, yet provides definitions for novelty and value as different features that are relevant to creativity. Oman and Tumer (2009) combine novelty and quality to evaluate individual ideas in engineering design as a relative measure of creativity. Shah, Smith, and Vargas-Hernandez (2003) associate creative design with ideation and develop metrics for novelty, variety, quality, and quantity of ideas. Wiggins (2006) argues that surprise is a property of the receiver of a creative artifact, that is, it is an emotional response. Cropley and Cropley (2005) propose four broad properties of products that can be used to describe the level and kind of creativity they possess: effectiveness, novelty, elegance, genesis. Besemer and O'Quin (1987) describe a Creative Product Semantic Scale which defines the creativity of products in three dimensions: novelty (the product is original, surprising and germinal), resolution (the product is valuable, logical, useful, and understandable), and elaboration and synthesis (the product is organic, elegant, complex, and well-crafted). Horn and Salvendy (2006) after doing an analysis of many properties of creative products, report on consumer perception of creativity in three critical perceptions: affect (our emotional response to the product), importance, and novelty. Goldenberg and Mazursky (2002) report on research that has found the observable characteristics of creativity in products to include \"original, of value, novel, interesting, elegant, unique, surprising.\" Amabile (1982) says it most clearly when she summarizes the social psychology literature on the assessment of creativity: While most definitions of creativity refer to novelty, appropriateness, and surprise, current creativity tests or assessment techniques are not closely linked to these criteria. She further argues that “There is no clear, explicit statement of the criteria that conceptually underlie the assessment procedures.” In response to an inability to establish and define criteria for evaluating creativity that is acceptable to all domains, Amabile (1982, 1996) introduced a Consensual Assessment Technique (CAT) in which creativity is assessed by a group of judges that are knowledgeable of the field. Since then, several scales for assisting human evaluators have been developed to guide human evaluators, for example, Besemer and O'Quin's (1999) Creative Product Semantic Scale, Reis and Renzulli's (1991) Student Product Assessment Form, and Cropley et al’s (2011) Creative Solution Diagnosis Scale. Maher (2010) presents an AI approach to evaluating creativity of a product by measuring novelty, value and surprise that provides a formal model for evaluating creative products. Novelty is a measure of how different the product is from existing products and is measured as a distance from clusters of other products in a conceptual space, characterizing the artifact as similar but different. Value is a measure of how the creative product co",
"title": ""
},
{
"docid": "c8453255bf200ed841229f5e637b2074",
"text": "One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a ‘‘model discrepancy’’ term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c93c0966ef744722d58bbc9170e9a8ab",
"text": "Past research has generated mixed support among social scientists for the utility of social norms in accounting for human behavior. We argue that norms do have a substantial impact on human action; however, the impact can only be properly recognized when researchers (a) separate 2 types of norms that at times act antagonistically in a situation—injunctive norms (what most others approve or disapprove) and descriptive norms (what most others do)—and (b) focus Ss' attention principally on the type of norm being studied. In 5 natural settings, focusing Ss on either the descriptive norms or the injunctive norms regarding littering caused the Ss* littering decisions to change only in accord with the dictates of the then more salient type of norm.",
"title": ""
},
{
"docid": "1b30c14536db1161b77258b1ce213fbb",
"text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.",
"title": ""
},
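The quadratic CTR-relevance fit mentioned in the abstract above can be illustrated in a few lines of NumPy; the arrays here are made-up placeholder values, not the paper's data.

```python
# Illustrative quadratic regression of normalized relevance on cleaned CTR.
# The numbers are hypothetical per-(query, ad) placeholders.
import numpy as np

cleaned_ctr = np.array([0.01, 0.03, 0.05, 0.08, 0.12, 0.20])   # dwell-time / last-click filtered CTR
relevance   = np.array([0.21, 0.35, 0.45, 0.58, 0.70, 0.86])   # normalized relevance scores

coeffs = np.polyfit(cleaned_ctr, relevance, deg=2)   # [a, b, c] for a*x^2 + b*x + c
fitted = np.polyval(coeffs, cleaned_ctr)
ss_res = np.sum((relevance - fitted) ** 2)
ss_tot = np.sum((relevance - relevance.mean()) ** 2)
print("quadratic fit coefficients:", coeffs, "R^2:", 1 - ss_res / ss_tot)
```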
{
"docid": "2bdf2abea3e137645f53d8a9b36327ad",
"text": "The use of a general-purpose code, COLSYS, is described. The code is capable of solving mixed-order systems of boundary-value problems in ordinary differential equations. The method of spline collocation at Gaussian points is implemented using a B-spline basis. Approximate solutions are computed on a sequence of automatically selected meshes until a user-specified set of tolerances is satisfied. A damped Newton's method is used for the nonlinear iteration. The code has been found to be particularly effective for difficult problems. It is intended that a user be able to use COLSYS easily after reading its algorithm description. The use of the code is then illustrated by examples demonstrating its effectiveness and capabilities.",
"title": ""
},
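COLSYS itself is a Fortran code, so the snippet below is only a loose modern analogue: SciPy's collocation-based `solve_bvp` solving an assumed example problem (y'' + y = 0, y(0) = 0, y(pi/2) = 1) on a coarse mesh that the solver refines automatically. It does not reproduce COLSYS's B-spline basis or damped-Newton details.

```python
# Rough analogue of collocation-based two-point BVP solving with adaptive meshes.
# Example problem is assumed (not from the paper); exact solution is sin(x).
import numpy as np
from scipy.integrate import solve_bvp

def rhs(x, y):
    return np.vstack([y[1], -y[0]])              # y0' = y1, y1' = -y0

def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])        # y(0) = 0, y(pi/2) = 1

x0 = np.linspace(0, np.pi / 2, 5)                # coarse initial mesh; solver refines it
y0 = np.zeros((2, x0.size))
sol = solve_bvp(rhs, bc, x0, y0, tol=1e-6)
print(sol.status, sol.x.size, "mesh points; max error vs sin(x):",
      np.max(np.abs(sol.sol(x0)[0] - np.sin(x0))))
```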
{
"docid": "2df35b05a40a646ba6f826503955601a",
"text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.",
"title": ""
},
{
"docid": "57d40d18977bc332ba16fce1c3cf5a66",
"text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.",
"title": ""
},
{
"docid": "d741b6f33ccfae0fc8f4a79c5c8aa9cb",
"text": "A nonlinear optimal controller with a fuzzy gain scheduler has been designed and applied to a Line-Of-Sight (LOS) stabilization system. Use of Linear Quadratic Regulator (LQR) theory is an optimal and simple manner of solving many control engineering problems. However, this method cannot be utilized directly for multigimbal LOS systems since they are nonlinear in nature. To adapt LQ controllers to nonlinear systems at least a linearization of the model plant is required. When the linearized model is only valid within the vicinity of an operating point a gain scheduler is required. Therefore, a Takagi-Sugeno Fuzzy Inference System gain scheduler has been implemented, which keeps the asymptotic stability performance provided by the optimal feedback gain approach. The simulation results illustrate that the proposed controller is capable of overcoming disturbances and maintaining a satisfactory tracking performance. Keywords—Fuzzy Gain-Scheduling, Gimbal, Line-Of-Sight Stabilization, LQR, Optimal Control",
"title": ""
},
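A simplified sketch of the gain-scheduled LQR idea in the abstract above, for illustration only: the plant models, operating points, and triangular memberships are assumptions, and the paper uses a Takagi-Sugeno fuzzy scheduler rather than this toy blend.

```python
# Hypothetical LQR gain scheduling: solve an LQR problem at a few linearization
# points and blend the gains with normalized triangular (fuzzy-style) memberships.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)            # K = R^-1 B^T P

# Linearized 2-state models at three assumed gimbal angles (degrees).
angles = np.array([-30.0, 0.0, 30.0])
models = [(np.array([[0.0, 1.0], [-a * 0.1, -1.0]]), np.array([[0.0], [1.0]]))
          for a in angles]
Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])
gains = [lqr_gain(A, B, Q, R) for A, B in models]

def scheduled_gain(angle):
    # Triangular memberships over the operating points (valid within +/- 60 deg),
    # normalized so that the blended gain is a convex combination.
    mu = np.maximum(0.0, 1.0 - np.abs(angle - angles) / 30.0)
    mu = mu / mu.sum()
    return sum(m * K for m, K in zip(mu, gains))
```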
{
"docid": "2950e3c1347c4adeeb2582046cbea4b8",
"text": "We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.\n Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.",
"title": ""
},
{
"docid": "3fd6a5960d40fa98051f7178b1abb8bd",
"text": "On average, resource-abundant countries have experienced lower growth over the last four decades than their resource-poor counterparts. But the most interesting aspect of the paradox of plenty is not the average effect of natural resources, but its variation. For every Nigeria or Venezuela there is a Norway or a Botswana. Why do natural resources induce prosperity in some countries but stagnation in others? This paper gives an overview of the dimensions along which resource-abundant winners and losers differ. In light of this, it then discusses different theory models of the resource curse, with a particular emphasis on recent developments in political economy.",
"title": ""
}
] | scidocsrr |
850a195fc49bfcc68808dd54c19d3d97 | Energy Saving Additive Neural Network | [
{
"docid": "b059f6d2e9f10e20417f97c05d92c134",
"text": "We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.",
"title": ""
}
] | [
{
"docid": "6bc2f0ea840e4b14e1340aa0c0bf4f07",
"text": "A low-voltage low-power CMOS operational transconductance amplifier (OTA) with near rail-to-rail output swing is presented in this brief. The proposed circuit is based on the current-mirror OTA topology. In addition, several circuit techniques are adopted to enhance the voltage gain. Simulated from a 0.8-V supply voltage, the proposed OTA achieves a 62-dB dc gain and a gain–bandwidth product of 160 MHz while driving a 2-pF load. The OTA is designed in a 0.18m CMOS process. The power consumption is 0.25 mW including the common-mode feedback circuit.",
"title": ""
},
{
"docid": "235edeee5ed3a16b88960400d13cb64f",
"text": "Product service systems (PSS) can be understood as an innovation / business strategy that includes a set of products and services that are realized by an actor network. More recently, PSS that comprise System of Systems (SoS) have been of increasing interest, notably in the transportation (autonomous vehicle infrastructures, multi-modal transportation) and energy sector (smart grids). Architecting such PSS-SoS goes beyond classic SoS engineering, as they are often driven by new technology, without an a priori client and actor network, and thus, a much larger number of potential architectures. However, it seems that neither the existing PSS nor SoS literature provides solutions to how to architect such PSS. This paper presents a methodology for architecting PSS-SoS that are driven by technological innovation. The objective is to design PSS-SoS architectures together with their value proposition and business model from an initial technology impact assessment. For this purpose, we adapt approaches from the strategic management, business modeling, PSS and SoS architecting literature. We illustrate the methodology by applying it to the case of an automobile PSS.",
"title": ""
},
{
"docid": "cdd3dd7a367027ebfe4b3f59eca99267",
"text": "3 Computation of the shearlet transform 13 3.1 Finite discrete shearlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 3.2 A discrete shearlet frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 3.3 Inversion of the shearlet transform . . . . . . . . . . . . . . . . . . . . . . . . . 20 3.4 Smooth shearlets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.5 Implementation details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.5.1 Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.5.2 Computation of spectra . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3.6 Short documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 3.7 Download & Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 3.8 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.9 Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32",
"title": ""
},
{
"docid": "a3da533f428b101c8f8cb0de04546e48",
"text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by tradition Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.",
"title": ""
},
{
"docid": "d81a5fd44adc6825e18e3841e4e66291",
"text": "We study compression techniques for parallel in-memory graph algorithms, and show that we can achieve reduced space usage while obtaining competitive or improved performance compared to running the algorithms on uncompressed graphs. We integrate the compression techniques into Ligra, a recent shared-memory graph processing system. This system, which we call Ligra+, is able to represent graphs using about half of the space for the uncompressed graphs on average. Furthermore, Ligra+ is slightly faster than Ligra on average on a 40-core machine with hyper-threading. Our experimental study shows that Ligra+ is able to process graphs using less memory, while performing as well as or faster than Ligra.",
"title": ""
},
{
"docid": "184402cd0ef80ae3426fd36fbb2ec998",
"text": "Hundreds of hours of videos are uploaded every minute on YouTube and other video sharing sites: some will be viewed by millions of people and other will go unnoticed by all but the uploader. In this paper we propose to use visual sentiment and content features to predict the popularity of web videos. The proposed approach outperforms current state-of-the-art methods on two publicly available datasets.",
"title": ""
},
{
"docid": "2da84ca7d7db508a6f9a443f2dbae7c1",
"text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.",
"title": ""
},
{
"docid": "d864cc5603c97a8ff3c070dd385fe3a8",
"text": "Nowadays, different protocols coexist in Internet that provides services to users. Unfortunately, control decisions and distributed management make it hard to control networks. These problems result in an inefficient and unpredictable network behaviour. Software Defined Networks (SDN) is a new concept of network architecture. It intends to be more flexible and to simplify the management in networks with respect to traditional architectures. Each of these aspects are possible because of the separation of control plane (controller) and data plane (switches) in network devices. OpenFlow is the most common protocol for SDN networks that provides the communication between control and data planes. Moreover, the advantage of decoupling control and data planes enables a quick evolution of protocols and also its deployment without replacing data plane switches. In this survey, we review the SDN technology and the OpenFlow protocol and their related works. Specifically, we describe some technologies as Wireless Sensor Networks and Wireless Cellular Networks and how SDN can be included within them in order to solve their challenges. We classify different solutions for each technology attending to the problem that is being fixed.",
"title": ""
},
{
"docid": "8674128201d80772040446f1ab6a7cd1",
"text": "In this paper, we present an attribute graph grammar for image parsing on scenes with man-made objects, such as buildings, hallways, kitchens, and living moms. We choose one class of primitives - 3D planar rectangles projected on images and six graph grammar production rules. Each production rule not only expands a node into its components, but also includes a number of equations that constrain the attributes of a parent node and those of its children. Thus our graph grammar is context sensitive. The grammar rules are used recursively to produce a large number of objects and patterns in images and thus the whole graph grammar is a type of generative model. The inference algorithm integrates bottom-up rectangle detection which activates top-down prediction using the grammar rules. The final results are validated in a Bayesian framework. The output of the inference is a hierarchical parsing graph with objects, surfaces, rectangles, and their spatial relations. In the inference, the acceptance of a grammar rule means recognition of an object, and actions are taken to pass the attributes between a node and its parent through the constraint equations associated with this production rule. When an attribute is passed from a child node to a parent node, it is called bottom-up, and the opposite is called top-down",
"title": ""
},
{
"docid": "3755f56410365a498c3a1ff4b61e77de",
"text": "Both high switching frequency and high efficiency are critical in reducing power adapter size. The active clamp flyback (ACF) topology allows zero voltage soft switching (ZVS) under all line and load conditions, eliminates leakage inductance and snubber losses, and enables high frequency and high power density power conversion. Traditional ACF ZVS operation relies on the resonance between leakage inductance and a small primary-side clamping capacitor, which leads to increased rms current and high conduction loss. This also causes oscillatory output rectifier current and impedes the implementation of synchronous rectification. This paper proposes a secondary-side resonance scheme to shape the primary current waveform in a way that significantly improves synchronous rectifier operation and reduces primary rms current. The concept is verified with a ${\\mathbf{25}}\\hbox{--}{\\text{W/in}}^{3}$ high-density 45-W adapter prototype using a monolithic gallium nitride power IC. Over 93% full-load efficiency was demonstrated at the worst case 90-V ac input and maximum full-load efficiency was 94.5%.",
"title": ""
},
{
"docid": "cc4548925973baa6220ad81082a93c86",
"text": "Usually benefits for transportation investments are analysed within a framework of cost-benefit analysis or its related techniques such as financial analysis, cost-effectiveness analysis, life-cycle costing, economic impact analysis, and others. While these tools are valid techniques in general, their application to intermodal transportation would underestimate the overall economic impact by missing important aspects of productivity enhancement. Intermodal transportation is an example of the so-called general purpose technologies (GPTs) that are characterized by statistically significant spillover effects. Diffusion, secondary innovations, and increased demand for specific human capital are basic features of GPTs. Eventually these features affect major macroeconomic variables, especially productivity. Recent economic literature claims that in order to study GPTs, micro and macro evidence should be combined to establish a better understanding of the connecting mechanisms from the micro level to the overall performance of an economy or the macro level. This study analyses these issues with respect to intermodal transportation. The goal is to understand the basic micro and macro mechanisms behind intermodal transportation in order to further develop a rigorous framework for evaluation of benefits from intermodal transportation. In doing so, lessons from computer simulation of the basic features of intermodal transportation are discussed and conclusions are made regarding an agenda for work in the field. 1 Dr. Yuri V. Yevdokimov, Assistant Professor of Economics and Civil Engineering, University of New Brunswick, Canada, Tel. (506) 447-3221, Fax (506) 453-4514, E-mail: [email protected] Introduction Intermodal transportation can be thought of as a process for transporting freight and passengers by means of a system of interconnected networks, involving various combinations of modes of transportation, in which all of the components are seamlessly linked and efficiently combined. Intermodal transportation is rapidly gaining acceptance as an integral component of the systems approach of conducting business in an increasingly competitive and interdependent global economy. For example, the United States Code with respect to transportation states: AIt is the policy of the United States Government to develop a National Intermodal Transportation System that is economically efficient and environmentally sound, provides the foundation for the United States to compete in the global economy and will move individuals and property in an energy efficient way. The National Intermodal Transportation System shall consist of all forms of transportation in a unified, interconnected manner, including the transportation systems of the future, to reduce energy consumption and air pollution while promoting economic development and supporting the United States= pre-eminent position in international commerce.@ (49 USC, Ch. 55, Sec. 5501, 1998) David Collenette (1997), the Transport Minister of Canada, noted: AWith population growth came development, and the relative advantages and disadvantages of the different modes changed as the transportation system became more advanced.... 
Intermodalism today is about safe, efficient transportation by the most appropriate combination of modes.” (The Summit on North American Intermodal Transportation, 1997) These statements define intermodal transportation as a macroeconomic concept, because an effective transportation system is a vital factor in assuring the efficiency of an economic system as a whole. Moreover, intermodal transportation is an important socio-economic phenomenon which implies that the benefits of intermodal transportation have to be evaluated at the macroeconomic level, or at least at the regional level, involving all elements of the economic system that gain from having a more efficient transportation network in place. Defining Economic Benefits of Intermodal Transportation Traditionally, the benefits of a transportation investment have been primarily evaluated through reduced travel time and reduced vehicle maintenance and operation costs. However, according to Weisbrod and Treyz (1998), such methods underestimate the total benefits of transportation investment by “missing other important aspects of productivity enhancement.” It is so because transportation does not have an intrinsic purpose in itself and is rather intended to enable other economic activities such as production, consumption, leisure, and dissemination of knowledge to take place. Hence, in order to measure total economic benefits of investing in intermodal transportation, it is necessary to understand their basic relationships with different economic activities. Eventually, improvements in transportation reduce transportation costs. The immediate benefit of the reduction is the fall in total cost of production in an economic system under study which results in growth of the system’s output. This conclusion has been known in economic development literature since Tinbergen’s paper in 1957 (Tinbergen, 1957). However, the literature does not explicitly identify why transportation costs will fall. This issue is addressed in this discussion with respect to intermodal transportation. Transportation is a multiple service to multiple users. It is produced in transportation networks that provide infrastructure for economic activities. It appears that transportation networks have economies of scale. As discussed below, intermodal transportation magnifies these scale effects resulting in increasing returns to scale (IRS) of a specific nature. It implies that there are positive externalities that arise because of the scale effects, externalities that can initiate cumulative economic growth at the regional level as well as at the national level (see, for example, Brathen and Hervick, 1997, and Hussain and Westin, 1997). The phenomenon is known as a spill-over effect. Previously the effect has been evaluated through the contribution of transportation infrastructure investment to economic growth. Since Aschauer’s (1989) paper many economists have found evidence of such a contribution (see, for example, Bonaglia and Ferrara, 2000 and Khanam, 1996). Intermodal transportation as it was defined at the very beginning is more than mere improvements in transportation infrastructure. From a theoretical standpoint, it possesses some characteristics of the general-purpose technologies (GPT), and it seems appropriate to regard it as an example of the GPT, which is discussed below. It appears reasonable to study intermodal transportation as a two-way improvement of an economic system’s productivity.
On the one hand, it improves current operational functions of the system. On the other hand, it expands those functions. Both improvements are achieved by consolidating different transportation systems into a seamless transportation network that utilizes the comparative advantages of different transportation modes. Improvements due to intermodal transportation are associated with the increased productivity of transportation services and a reduction in logistic costs. The former results in an increased volume of transportation per unit cost, while the latter directly reduces costs of commodity production. Expansion of the intermodal transportation network is associated with economies of scale and better accessibility to input and output markets. The overall impact of intermodal transportation can be divided into four elements: (i) an increase in the volume of transportation in an existing transportation network; (ii) a reduction in logistic costs of current operations; (iii) the economies of scale associated with transportation network expansion; (iv) better accessibility to input and output markets. These four elements are discussed below in a sequence. Increase in volume of transportation in the existing network An increase in volume of transportation can lead to economies of density, a specific scale effect. The economies of density exist if an increase in the volume of transportation in the network does not require a proportional increase in all inputs of the network. Usually the phenomenon is associated with an increase in the frequency of transportation (traffic) within the existing network (see Boyer, 1998 for a formal definition, Ciccone and Hall, 1996 for general discussion of economies of density, and Fujii, Im and Mak, 1992 for examples of economies of density in transportation). In the case of intermodal transportation, economies of density are achieved through cargo containerization, cargo consolidation and computer-guiding systems at intermodal facilities. Cargo containerization and consolidation result in an increased load factor of transportation vehicles and higher capacity utilization of the transportation fixed facilities, while utilization of computer-guiding systems results in higher labour productivity. For instance, in 1994 Burlington Northern Santa Fe Railway (BNSF) introduced the Alliance Intermodal Facility at Fort Worth, Texas, into its operations between Chicago and Los Angeles. According to OmniTRAX specialists, who operate the facility, BNSF has nearly doubled its volume of throughput at the intermodal facility since 1994. First, containerization of commodities being transported plus hubbing or cargo consolidation at the intermodal facility resulted in longer trains with higher frequency. Second, all day-to-day operations at the intermodal facility are governed by the Optimization Alternatives Strategic Intermodal Scheduler (OASIS) computer system, which allowed BNSF to handle more operations with less labour. Reduction in Logistic Costs Intermodal transportation is characterized by optimal frequency of service and modal choice and increased reliability. Combined, these two features define the just-in-time delivery - a major service produced by intermodal transportation. Furthermore, Blackburn (1991) argues that just-in-time d",
"title": ""
},
{
"docid": "a926341e8b663de6c412b8e3a61ee171",
"text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. con los usuarios integrándose fácilmente en plataformas de e-aprendizaje. Para nuestra aplicación hemos escogido el Java Ejs (Easy Java Simulations), ya que es una herramienta de software gratuita, diseñada para el desarrollo de laboratorios virtuales interactivos, dispone de elementos visuales parametrizables",
"title": ""
},
{
"docid": "5c935db4a010bc26d93dd436c5e2f978",
"text": "A taxonomic revision of Australian Macrobrachium identified three species new to the Australian fauna – two undescribed species and one new record, viz. M. auratumsp. nov., M. koombooloombasp. nov., and M. mammillodactylus(Thallwitz, 1892). Eight taxa previously described by Riek (1951) are recognised as new junior subjective synonyms, viz. M. adscitum adscitum, M. atactum atactum, M. atactum ischnomorphum, M. atactum sobrinum, M. australiense crassum, M. australiense cristatum, M. australiense eupharum of M. australienseHolthuis, 1950, and M. glypticumof M. handschiniRoux, 1933. Apart from an erroneous type locality for a junior subjective synonym, there were no records to confirm the presence of M. australe(Guérin-Méneville, 1838) on the Australian continent. In total, 13 species of Macrobrachiumare recorded from the Australian continent. Keys to male developmental stages and Australian species are provided. A revised diagnosis is given for the genus. A list of 31 atypical species which do not appear to be based on fully developed males or which require re-evaluation of their generic status is provided. Terminology applied to spines and setae is revised.",
"title": ""
},
{
"docid": "e2d39e2714351b04054b871fa8a7a2fa",
"text": "In this letter, we propose sparsity-based coherent and noncoherent dictionaries for action recognition. First, the input data are divided into different clusters and the number of clusters depends on the number of action categories. Within each cluster, we seek data items of each action category. If the number of data items exceeds threshold in any action category, these items are labeled as coherent. In a similar way, all coherent data items from different clusters form a coherent group of each action category, and data that are not part of the coherent group belong to noncoherent group of each action category. These coherent and noncoherent groups are learned using K-singular value decomposition dictionary learning. Since the coherent group has more similarity among data, only few atoms need to be learned. In the noncoherent group, there is a high variability among the data items. So, we propose an orthogonal-projection-based selection to get optimal dictionary in order to retain maximum variance in the data. Finally, the obtained dictionary atoms of both groups in each action category are combined and then updated using the limited Broyden–Fletcher–Goldfarb–Shanno optimization algorithm. The experiments are conducted on challenging datasets HMDB51 and UCF50 with action bank features and achieve comparable result using this state-of-the-art feature.",
"title": ""
},
{
"docid": "56e47efe6efdb7819c6a2e87e8fbb56e",
"text": "Recent investigations of Field Programmable Gate Array (FPGA)-based time-to-digital converters (TDCs) have predominantly focused on improving the time resolution of the device. However, the monolithic integration of multi-channel TDCs and the achievement of high measurement throughput remain challenging issues for certain applications. In this paper, the potential of the resources provided by the Kintex-7 Xilinx FPGA is fully explored, and a new design is proposed for the implementation of a high performance multi-channel TDC system on this FPGA. Using the tapped-delay-line wave union TDC architecture, in which a negative pulse is triggered by the hit signal propagating along the carry chain, two time measurements are performed in a single carry chain within one clock cycle. The differential non-linearity and time resolution can be significantly improved by realigning the bins. The on-line calibration and on-line updating of the calibration table reduce the influence of variations of environmental conditions. The logic resources of the 6-input look-up tables in the FPGA are employed for hit signal edge detection and bubble-proof encoding, thereby allowing the TDC system to operate at the maximum allowable clock rate of the FPGA and to achieve the maximum possible measurement throughput. This resource-efficient design, in combination with a modular implementation, makes the integration of multiple channels in one FPGA practicable. Using our design, a 128-channel TDC with a dead time of 1.47 ns, a dynamic range of 360 ns, and a root-mean-square resolution of less than 10 ps was implemented in a single Kintex-7 device.",
"title": ""
},
{
"docid": "b06fc6126bf086cdef1d5ac289cf5ebe",
"text": "Rhinophyma is a subtype of rosacea characterized by nodular thickening of the skin, sebaceous gland hyperplasia, dilated pores, and in its late stage, fibrosis. Phymatous changes in rosacea are most common on the nose but can also occur on the chin (gnatophyma), ears (otophyma), and eyelids (blepharophyma). In severe cases, phymatous changes result in the loss of normal facial contours, significant disfigurement, and social isolation. Additionally, patients with profound rhinophyma can experience nare obstruction and difficulty breathing due to the weight and bulk of their nose. Treatment options for severe advanced rhinophyma include cryosurgery, partial-thickness decortication with subsequent secondary repithelialization, carbon dioxide (CO2) or erbium-doped yttrium aluminum garnet (Er:YAG) laser ablation, full-thickness resection with graft or flap reconstruction, excision by electrocautery or radio frequency, and sculpting resection using a heated Shaw scalpel. We report a severe case of rhinophyma resulting in marked facial disfigurement and nasal obstruction treated successfully using the Shaw scalpel. Rhinophymectomy using the Shaw scalpel allows for efficient and efficacious treatment of rhinophyma without the need for multiple procedures or general anesthesia and thus should be considered in patients with nare obstruction who require intervention.",
"title": ""
},
{
"docid": "3c29c0a3e8ec6292f05c7907436b5e9a",
"text": "Emerging Wi-Fi technologies are expected to cope with large amounts of traffic in dense networks. Consequently, proposals for the future IEEE 802.11ax Wi-Fi amendment include sensing threshold and transmit power adaptation, in order to improve spatial reuse. However, it is not yet understood to which extent such adaptive approaches — and which variant — would achieve a better balance between spatial reuse and the level of interference, in order to improve the network performance. Moreover, it is not clear how legacy Wi-Fi devices would be affected by new-generation Wi-Fi implementing these adaptive design parameters. In this paper we present a thorough comparative study in ns-3 for four major proposed adaptation algorithms and we compare their performance against legacy non-adaptive Wi-Fi. Additionally, we consider mixed populations where both legacy non-adaptive and new-generation adaptive populations coexist. We assume a dense indoor residential deployment and different numbers of available channels in the 5 GHz band, relevant for future IEEE 802.11ax. Our results show that for the dense scenarios considered, the algorithms do not significantly improve the overall network performance compared to the legacy baseline, as they increase the throughput of some nodes, while decreasing the throughput of others. For mixed populations in dense deployments, adaptation algorithms that improve the performance of new-generation nodes degrade the performance of legacy nodes and vice versa. This suggests that to support Wi-Fi evolution for dense deployments and consistently increase the throughput throughout the network, more sophisticated algorithms are needed, e.g. considering combinations of input parameters in current variants.",
"title": ""
},
{
"docid": "eb3eccf745937773c399334673235f57",
"text": "Continuous practices, i.e., continuous integration, delivery, and deployment, are the software development industry practices that enable organizations to frequently and reliably release new features and products. With the increasing interest in the literature on continuous practices, it is important to systematically review and synthesize the approaches, tools, challenges, and practices reported for adopting and implementing continuous practices. This paper aimed at systematically reviewing the state of the art of continuous practices to classify approaches and tools, identify challenges and practices in this regard, and identify the gaps for future research. We used the systematic literature review method for reviewing the peer-reviewed papers on continuous practices published between 2004 and June 1, 2016. We applied the thematic analysis method for analyzing the data extracted from reviewing 69 papers selected using predefined criteria. We have identified 30 approaches and associated tools, which facilitate the implementation of continuous practices in the following ways: 1) reducing build and test time in continuous integration (CI); 2) increasing visibility and awareness on build and test results in CI; 3) supporting (semi-) automated continuous testing; 4) detecting violations, flaws, and faults in CI; 5) addressing security and scalability issues in deployment pipeline; and 6) improving dependability and reliability of deployment process. We have also determined a list of critical factors, such as testing (effort and time), team awareness and transparency, good design principles, customer, highly skilled and motivated team, application domain, and appropriate infrastructure that should be carefully considered when introducing continuous practices in a given organization. The majority of the reviewed papers were validation (34.7%) and evaluation (36.2%) research types. This paper also reveals that continuous practices have been successfully applied to both greenfield and maintenance projects. Continuous practices have become an important area of software engineering research and practice. While the reported approaches, tools, and practices are addressing a wide range of challenges, there are several challenges and gaps, which require future research work for improving the capturing and reporting of contextual information in the studies reporting different aspects of continuous practices; gaining a deep understanding of how software-intensive systems should be (re-) architected to support continuous practices; and addressing the lack of knowledge and tools for engineering processes of designing and running secure deployment pipelines.",
"title": ""
},
{
"docid": "a9dbb873487081afcc2a24dd7cb74bfe",
"text": "We presented the first single block collision attack on MD5 with complexity of 2 MD5 compressions and posted the challenge for another completely new one in 2010. Last year, Stevens presented a single block collision attack to our challenge, with complexity of 2 MD5 compressions. We really appreciate Stevens’s hard work. However, it is a pity that he had not found even a better solution than our original one, let alone a completely new one and the very optimal solution that we preserved and have been hoping that someone can find it, whose collision complexity is about 2 MD5 compressions. In this paper, we propose a method how to choose the optimal input difference for generating MD5 collision pairs. First, we divide the sufficient conditions into two classes: strong conditions and weak conditions, by the degree of difficulty for condition satisfaction. Second, we prove that there exist strong conditions in only 24 steps (one and a half rounds) under specific conditions, by utilizing the weaknesses of compression functions of MD5, which are difference inheriting and message expanding. Third, there should be no difference scaling after state word q25 so that it can result in the least number of strong conditions in each differential path, in such a way we deduce the distribution of strong conditions for each input difference pattern. Finally, we choose the input difference with the least number of strong conditions and the most number of free message words. We implement the most efficient 2-block MD5 collision attack, which needs only about 2 MD5 compressions to find a collision pair, and show a single-block collision attack with complexity 2.",
"title": ""
},
{
"docid": "cb66a49205c9914be88a7631ecc6c52a",
"text": "BACKGROUND\nMidline facial clefts are rare and challenging deformities caused by failure of fusion of the medial nasal prominences. These anomalies vary in severity, and may include microform lines or midline lip notching, incomplete or complete labial clefting, nasal bifidity, or severe craniofacial bony and soft tissue anomalies with orbital hypertelorism and frontoethmoidal encephaloceles. In this study, the authors present 4 cases, classify the spectrum of midline cleft anomalies, and review our technical approaches to the surgical correction of midline cleft lip and bifid nasal deformities. Embryology and associated anomalies are discussed.\n\n\nMETHODS\nThe authors retrospectively reviewed our experience with 4 cases of midline cleft lip with and without nasal deformities of varied complexity. In addition, a comprehensive literature search was performed, identifying studies published relating to midline cleft lip and/or bifid nose deformities. Our assessment of the anomalies in our series, in conjunction with published reports, was used to establish a 5-tiered classification system. Technical approaches and clinical reports are described.\n\n\nRESULTS\nFunctional and aesthetic anatomic correction was successfully achieved in each case without complication. A classification and treatment strategy for the treatment of midline cleft lip and bifid nose deformity is presented.\n\n\nCONCLUSIONS\nThe successful treatment of midline cleft lip and bifid nose deformities first requires the identification and classification of the wide variety of anomalies. With exposure of abnormal nasolabial anatomy, the excision of redundant skin and soft tissue, anatomic approximation of cartilaginous elements, orbicularis oris muscle repair, and craniofacial osteotomy and reduction as indicated, a single-stage correction of midline cleft lip and bifid nasal deformity can be safely and effectively achieved.",
"title": ""
}
] | scidocsrr |
c63c27c7e3e2d176948b319b9c2257e2 | Regularizing Relation Representations by First-order Implications | [
{
"docid": "6a7bfed246b83517655cb79a951b1f48",
"text": "Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval.",
"title": ""
},
{
"docid": "cc15583675d6b19fbd9a10f06876a61e",
"text": "Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae.",
"title": ""
},
{
"docid": "5d2eda181896eadbc56b9d12315062b4",
"text": "Corpus-based distributional semantic models capture degrees of semantic relatedness among the words of very large vocabularies, but have problems with logical phenomena such as entailment, that are instead elegantly handled by model-theoretic approaches, which, in turn, do not scale up. We combine the advantages of the two views by inducing a mapping from distributional vectors of words (or sentences) into a Boolean structure of the kind in which natural language terms are assumed to denote. We evaluate this Boolean Distributional Semantic Model (BDSM) on recognizing entailment between words and sentences. The method achieves results comparable to a state-of-the-art SVM, degrades more gracefully when less training data are available and displays interesting qualitative properties.",
"title": ""
}
] | [
{
"docid": "f67e221a12e0d8ebb531a1e7c80ff2ff",
"text": "Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: 1) relying on object or part annotations which are heavily labor consuming; and 2) ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification and the main novelties are: 1) object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotion; and 2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Compared with more than ten state-of-the-art methods on four widely-used datasets, our OPAM approach achieves the best performance.",
"title": ""
},
{
"docid": "302acca2245a0d97cfec06a92dfc1a71",
"text": "concepts of tissue-integrated prostheses with remarkable functional advantages, innovations have resulted in dental implant solutions spanning the spectrum of dental needs. Current discussions concerning the relative merit of an implant versus a 3-unit fixed partial denture fully illustrate the possibility that single implants represent a bona fide choice for tooth replacement. Interestingly, when delving into the detailed comparisons between the outcomes of single-tooth implant versus fixed partial dentures or the intentional replacement of a failing tooth with an implant instead of restoration involving root canal therapy, little emphasis has been placed on the relative esthetic merits of one or another therapeutic approach to tooth replacement therapy. An ideal prosthesis should fully recapitulate or enhance the esthetic features of the tooth or teeth it replaces. Although it is clearly beyond the scope of this article to compare the various methods of esthetic tooth replacement, there is, perhaps, sufficient space to share some insights regarding an objective approach to planning, executing and evaluating the esthetic merit of single-tooth implant restorations.",
"title": ""
},
{
"docid": "9c3050cca4deeb2d94ae5cff883a2d68",
"text": "High speed, low latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that do high-level mapping and 3D path planning for obstacle avoidance, most of these systems require high-powered CPUs on-board or off-board control from a ground station. We present a novel entirely on-board approach, leveraging a light-weight low power stereo vision system on FPGA. Our approach runs at a frame rate of 60 frames a second on VGA-sized images and minimizes latency between image acquisition and performing reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a light-weight safety layer for systems undertaking more complex tasks, like mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.",
"title": ""
},
{
"docid": "b6fde2eeef81b222c7472f07190d7e5a",
"text": "Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. By now most of research work in combinatorial testing aims to propose novel approaches trying to generate test suites with minimum size that still cover all the pairwise, triple, or n-way combinations of factors. Since the difficulty of solving this problem is demonstrated to be NP-hard, existing approaches have been designed to generate optimal or near optimal combinatorial test suites in polynomial time. In this paper, we try to apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e. a special case of combinatorial testing aiming to cover all the pairwise combinations). To systematically build pairwise test suites, we propose two different PSO based algorithms. One algorithm is based on one-test-at-a-time strategy and the other is based on IPO-like strategy. In these two different algorithms, we use PSO to complete the construction of a single test. To successfully apply PSO to cover more uncovered pairwise combinations in this construction process, we provide a detailed description on how to formulate the search space, define the fitness function and set some heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare our approach to other well-known approaches. Final empirical results show the effectiveness and efficiency of our approach.",
"title": ""
},
{
"docid": "9d1dc15130b9810f6232b4a3c77e8038",
"text": "This paper argues that we should seek the golden middle way between dynamically and statically typed languages.",
"title": ""
},
{
"docid": "cf26ade7932ba0c5deb01e4b3d2463bb",
"text": "Researchers are often confused about what can be inferred from significance tests. One problem occurs when people apply Bayesian intuitions to significance testing-two approaches that must be firmly separated. This article presents some common situations in which the approaches come to different conclusions; you can see where your intuitions initially lie. The situations include multiple testing, deciding when to stop running participants, and when a theory was thought of relative to finding out results. The interpretation of nonsignificant results has also been persistently problematic in a way that Bayesian inference can clarify. The Bayesian and orthodox approaches are placed in the context of different notions of rationality, and I accuse myself and others as having been irrational in the way we have been using statistics on a key notion of rationality. The reader is shown how to apply Bayesian inference in practice, using free online software, to allow more coherent inferences from data.",
"title": ""
},
{
"docid": "ff1c454fc735c49325a62909cc0e27e8",
"text": "We describe a framework for characterizing people’s behavior with Digital Live Art. Our framework considers people’s wittingness, technical skill, and interpretive abilities in relation to the performance frame. Three key categories of behavior with respect to the performance frame are proposed: performing, participating, and spectating. We exemplify the use of our framework by characterizing people’s interaction with a DLA iPoi. This DLA is based on the ancient Maori art form of poi and employs a wireless, peer-to-peer exertion interface. The design goal of iPoi is to draw people into the performance frame and support transitions from audience to participant and on to performer. We reflect on iPoi in a public performance and outline its key design features.",
"title": ""
},
{
"docid": "bc955a52d08f192b06844721fcf635a0",
"text": "Total quality management (TQM) has been widely considered as the strategic, tactical and operational tool in the quality management research field. It is one of the most applied and well accepted approaches for business excellence besides Continuous Quality Improvement (CQI), Six Sigma, Just-in-Time (JIT), and Supply Chain Management (SCM) approaches. There is a great enthusiasm among manufacturing and service industries in adopting and implementing this strategy in order to maintain their sustainable competitive advantage. The aim of this study is to develop and propose the conceptual framework and research model of TQM implementation in relation to company performance particularly in context with the Indian service companies. It examines the relationships between TQM and company’s performance by measuring the quality performance as performance indicator. A comprehensive review of literature on TQM and quality performance was carried out to accomplish the objectives of this study and a research model and hypotheses were generated. Two research questions and 34 hypotheses were proposed to re-validate the TQM practices. The adoption of such a theoretical model on TQM and company’s quality performance would help managers, decision makers, and practitioners of TQM in better understanding of the TQM practices and to focus on the identified practices while implementing TQM in their companies. Further, the scope for future study is to test and validate the theoretical model by collecting the primary data from the Indian service companies and using Structural Equation Modeling (SEM) approach for hypotheses testing.",
"title": ""
},
{
"docid": "d10c1659bc8f166c077a658421fbd388",
"text": "Blockchain-based distributed computing platforms enable the trusted execution of computation—defined in the form of smart contracts—without trusted agents. Smart contracts are envisioned to have a variety of applications, ranging from financial to IoT asset tracking. Unfortunately, the development of smart contracts has proven to be extremely error prone. In practice, contracts are riddled with security vulnerabilities comprising a critical issue since bugs are by design nonfixable and contracts may handle financial assets of significant value. To facilitate the development of secure smart contracts, we have created the FSolidM framework, which allows developers to define contracts as finite state machines (FSMs) with rigorous and clear semantics. FSolidM provides an easy-to-use graphical editor for specifying FSMs, a code generator for creating Ethereum smart contracts, and a set of plugins that developers may add to their FSMs to enhance security and functionality.",
"title": ""
},
{
"docid": "d9240bad8516bea63f9340bcde366ee4",
"text": "This paper describes a novel feature selection algorithm for unsupervised clustering, that combines the clustering ensembles method and the population based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset can achieve the most similar clustering solution to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is firstly achieved by a clustering ensembles method, then the population based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. In addition, the proposed unsupervised feature selection algorithm leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm is often able to obtain a better feature subset when compared with other existing unsupervised feature selection algorithms.",
"title": ""
},
{
"docid": "2271085513d9239225c9bfb2f6b155b1",
"text": "Information Security has become an important issue in data communication. Encryption algorithms have come up as a solution and play an important role in information security system. On other side, those algorithms consume a significant amount of computing resources such as CPU time, memory and battery power. Therefore it is essential to measure the performance of encryption algorithms. In this work, three encryption algorithms namely DES, AES and Blowfish are analyzed by considering certain performance metrics such as execution time, memory required for implementation and throughput. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.",
"title": ""
},
{
"docid": "aa2bf057322c9a8d2c7d1ce7d6a384d3",
"text": "Our team is currently developing an Automated Cyber Red Teaming system that, when given a model-based capture of an organisation's network, uses automated planning techniques to generate and assess multi-stage attacks. Specific to this paper, we discuss our development of the visual analytic component of this system. Through various views that display network attacks paths at different levels of abstraction, our tool aims to enhance cyber situation awareness of human decision makers.",
"title": ""
},
{
"docid": "00f2bb2dd3840379c2442c018407b1c8",
"text": "BACKGROUND\nFacebook is a social networking site (SNS) for communication, entertainment and information exchange. Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals.\n\n\nAIM\nTo assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness.\n\n\nMETHODS\nA cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using Bergen Facebook Addiction Scale (BFAS) and University of California and Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. Pearson's bivariate correlation was done to see the relationship between severity of Facebook addiction and the experience of loneliness.\n\n\nRESULTS\nMore than one-fourth (26%) of the study participants had Facebook addiction and 33% had a possibility of Facebook addiction. There was a significant positive correlation between severity of Facebook addiction and extent of experience of loneliness ( r = .239, p = .017).\n\n\nCONCLUSION\nWith the rapid growth of popularity and user-base of Facebook, a significant portion of the individuals are susceptible to develop addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.",
"title": ""
},
{
"docid": "7167964274b05da06beddb1aef119b2c",
"text": "A great variety of systems in nature, society and technology—from the web of sexual contacts to the Internet, from the nervous system to power grids—can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via email, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names—temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology—rather, we want to make papers readable across disciplines.",
"title": ""
},
{
"docid": "26f2b200bf22006ab54051c9288420e8",
"text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.",
"title": ""
},
{
"docid": "0f01dbd1e554ee53ca79258610d835c1",
"text": "Received: 18 March 2009 Revised: 18 March 2009 Accepted: 20 April 2009 Abstract Adaptive visualization is a new approach at the crossroads of user modeling and information visualization. Taking into account information about a user, adaptive visualization attempts to provide user-adapted visual presentation of information. This paper proposes Adaptive VIBE, an approach for adaptive visualization of search results in an intelligence analysis context. Adaptive VIBE extends the popular VIBE visualization framework by infusing user model terms as reference points for spatial document arrangement and manipulation. We explored the value of the proposed approach using data obtained from a user study. The result demonstrated that user modeling and spatial visualization technologies are able to reinforce each other, creating an enhanced level of user support. Spatial visualization amplifies the user model's ability to separate relevant and non-relevant documents, whereas user modeling adds valuable reference points to relevance-based spatial visualization. Information Visualization (2009) 8, 167--179. doi:10.1057/ivs.2009.12",
"title": ""
},
{
"docid": "443637fcc9f9efcf1026bb64aa0a9c97",
"text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.",
"title": ""
},
{
"docid": "d60f812bb8036a2220dab8740f6a74c4",
"text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4b28bc08ebeaf9be27ce642e622e064d",
"text": "Homogeneity analysis combines the idea of maximizing the correlations between variables of a multivariate data set with that of optimal scaling. In this article we present methodological and practical issues of the R package homals which performs homogeneity analysis and various extensions. By setting rank constraints nonlinear principal component analysis can be performed. The variables can be partitioned into sets such that homogeneity analysis is extended to nonlinear canonical correlation analysis or to predictive models which emulate discriminant analysis and regression models. For each model the scale level of the variables can be taken into account by setting level constraints. All algorithms allow for missing values.",
"title": ""
}
] | scidocsrr |
b2f87f4a0421f6a01a15ce452ee81fc3 | Dataset for forensic analysis of B-tree file system | [
{
"docid": "61953281f4b568ad15e1f62be9d68070",
"text": "Most of the effort in today’s digital forensics community lies in the retrieval and analysis of existing information from computing systems. Little is being done to increase the quantity and quality of the forensic information on today’s computing systems. In this paper we pose the question of what kind of information is desired on a system by a forensic investigator. We give an overview of the information that exists on current systems and discuss its shortcomings. We then examine the role that file system metadata plays in digital forensics and analyze what kind of information is desirable for different types of forensic investigations, how feasible it is to obtain it, and discuss issues about storing the information.",
"title": ""
}
] | [
{
"docid": "ff572d9c74252a70a48d4ba377f941ae",
"text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers, they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts, with the properties of a published paper that presents fictional research, Game of Drones. Extending the notion of imaginary abstracts so that rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.",
"title": ""
},
{
"docid": "e668ffe258772aa5eb425cdfa5edb5ed",
"text": "A novel method of on-line 2,2′-Azinobis-(3-ethylbenzthiazoline-6-sulphonate)-Capillary Electrophoresis-Diode Array Detector (on-line ABTS+-CE-DAD) was developed to screen the major antioxidants from complex herbal medicines. ABTS+, one of well-known oxygen free radicals was firstly integrated into the capillary. For simultaneously detecting and separating ABTS+ and chemical components of herb medicines, some conditions were optimized. The on-line ABTS+-CE-DAD method has successfully been used to screen the main antioxidants from Shuxuening injection (SI), an herbal medicines injection. Under the optimum conditions, nine ingredients of SI including clitorin, rutin, isoquercitrin, Quercetin-3-O-D-glucosyl]-(1-2)-L-rhamnoside, kaempferol-3-O-rutinoside, kaempferol-7-O-β-D-glucopyranoside, apigenin-7-O-Glucoside, quercetin-3-O-[2-O-(6-O-p-hydroxyl-E-coumaroyl)-D-glucosyl]-(1-2)-L-rhamnoside, 3-O-{2-O-[6-O-(p-hydroxyl-E-coumaroyl)-glucosyl]}-(1-2) rhamnosyl kaempfero were separated and identified as the major antioxidants. There is a linear relationship between the total amount of major antioxidants and total antioxidative activity of SI with a linear correlation coefficient of 0.9456. All the Relative standard deviations of recovery, precision and stability were below 7.5%. Based on these results, these nine ingredients could be selected as combinatorial markers to evaluate quality control of SI. It was concluded that on-line ABTS+-CE-DAD method was a simple, reliable and powerful tool to screen and quantify active ingredients for evaluating quality of herbal medicines.",
"title": ""
},
{
"docid": "5404c00708c64d9f254c25f0065bc13c",
"text": "In this paper, we discuss the problem of automatic skin lesion analysis, specifically melanoma detection and semantic segmentation. We accomplish this by using deep learning techniques to perform classification on publicly available dermoscopic images. Skin cancer, of which melanoma is a type, is the most prevalent form of cancer in the US and more than four million cases are diagnosed in the US every year. In this work, we present our efforts towards an accessible, deep learning-based system that can be used for skin lesion classification, thus leading to an improved melanoma screening system. For classification, a deep convolutional neural network architecture is first implemented over the raw images. In addition, hand-coded features such as 166-D color histogram distribution, edge histogram and Multiscale Color local binary patterns are extracted from the images and presented to a random forest classifier. The average of the outputs from the two mentioned classifiers is taken as the final classification result. The classification task achieves an accuracy of 80.3%, AUC score of 0.69 and a precision score of 0.81. For segmentation, we implement a convolutional-deconvolutional architecture and the segmentation model achieves a Dice coefficient of 73.5%.",
"title": ""
},
{
"docid": "8e8c566d93f11bd96318978dd4b21ed1",
"text": "Recently, neural-network based word embedding models have been shown to produce high-quality distributional representations capturing both semantic and syntactic information. In this paper, we propose a grouping-based context predictive model by considering the interactions of context words, which generalizes the widely used CBOW model and Skip-Gram model. In particular, the words within a context window are split into several groups with a grouping function, where words in the same group are combined while different groups are treated as independent. To determine the grouping function, we propose a relatedness hypothesis stating the relationship among context words and propose several context grouping methods. Experimental results demonstrate better representations can be learned with suitable context groups.",
"title": ""
},
{
"docid": "fe1bc993047a95102f4331f57b1f9197",
"text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.",
"title": ""
},
{
"docid": "b6e5f04832ece23bf74e49a3dd191eef",
"text": "Integration of knowledge concerning circadian rhythms, metabolic networks, and sleep-wake cycles is imperative for unraveling the mysteries of biological cycles and their underlying mechanisms. During the last decade, enormous progress in circadian biology research has provided a plethora of new insights into the molecular architecture of circadian clocks. However, the recent identification of autonomous redox oscillations in cells has expanded our view of the clockwork beyond conventional transcription/translation feedback loop models, which have been dominant since the first circadian period mutants were identified in fruit fly. Consequently, non-transcriptional timekeeping mechanisms have been proposed, and the antioxidant peroxiredoxin proteins have been identified as conserved markers for 24-hour rhythms. Here, we review recent advances in our understanding of interdependencies amongst circadian rhythms, sleep homeostasis, redox cycles, and other cellular metabolic networks. We speculate that systems-level investigations implementing integrated multi-omics approaches could provide novel mechanistic insights into the connectivity between daily cycles and metabolic systems.",
"title": ""
},
{
"docid": "935ebaec03bd12c85731eb42abcd578e",
"text": "Utilization of polymers as biomaterials has greatly impacted the advancement of modern medicine. Specifically, polymeric biomaterials that are biodegradable provide the significant advantage of being able to be broken down and removed after they have served their function. Applications are wide ranging with degradable polymers being used clinically as surgical sutures and implants. In order to fit functional demand, materials with desired physical, chemical, biological, biomechanical and degradation properties must be selected. Fortunately, a wide range of natural and synthetic degradable polymers has been investigated for biomedical applications with novel materials constantly being developed to meet new challenges. This review summarizes the most recent advances in the field over the past 4 years, specifically highlighting new and interesting discoveries in tissue engineering and drug delivery applications.",
"title": ""
},
{
"docid": "9002cca44b21fb7923ae18ced55bbcc2",
"text": "Species extinctions pose serious threats to the functioning of ecological communities worldwide. We used two qualitative and quantitative pollination networks to simulate extinction patterns following three removal scenarios: random removal and systematic removal of the strongest and weakest interactors. We accounted for pollinator behaviour by including potential links into temporal snapshots (12 consecutive 2-week networks) to reflect mutualists' ability to 'switch' interaction partners (re-wiring). Qualitative data suggested a linear or slower than linear secondary extinction while quantitative data showed sigmoidal decline of plant interaction strength upon removal of the strongest interactor. Temporal snapshots indicated greater stability of re-wired networks over static systems. Tolerance of generalized networks to species extinctions was high in the random removal scenario, with an increase in network stability if species formed new interactions. Anthropogenic disturbance, however, that promote the extinction of the strongest interactors might induce a sudden collapse of pollination networks.",
"title": ""
},
{
"docid": "b67acf80642aa2ba8ba01c362303857c",
"text": "Storm has long served as the main platform for real-time analytics at Twitter. However, as the scale of data being processed in real-time at Twitter has increased, along with an increase in the diversity and the number of use cases, many limitations of Storm have become apparent. We need a system that scales better, has better debug-ability, has better performance, and is easier to manage -- all while working in a shared cluster infrastructure. We considered various alternatives to meet these needs, and in the end concluded that we needed to build a new real-time stream data processing system. This paper presents the design and implementation of this new system, called Heron. Heron is now the de facto stream data processing engine inside Twitter, and in this paper we also share our experiences from running Heron in production. In this paper, we also provide empirical evidence demonstrating the efficiency and scalability of Heron.",
"title": ""
},
{
"docid": "c12d534d219e3d249ba3da1c0956c540",
"text": "Within the research on Micro Aerial Vehicles (MAVs), the field on flight control and autonomous mission execution is one of the most active. A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision based approach, where the vehicle is localized using a downward looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera, while, simultaneously, building an incremental map of the surrounding region. Based on this pose estimation a LQG/LTR based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers possible like take-off, hovering, setpoint following or landing. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons), which uses a single camera as only exteroceptive sensor.",
"title": ""
},
{
"docid": "933807e4458fb12ad45a3e951f53bb6d",
"text": "Zusammenfassung Es wird eine neuartige hybride Systemarchitektur für kontinuierliche Steuerungsund Regelungssysteme mit diskreten Entscheidungsfindungsprozessen vorgestellt. Die Funktionsweise wird beispielhaft für das hochautomatisierte Fahren auf Autobahnen und den Nothalteassistenten dargestellt. Da für einen zukünftigen Einsatz derartiger Systeme deren Robustheit entscheidend ist, wurde diese bei der Entwicklung des Ansatzes in den Mittelpunkt gestellt. Summary An innovative hybrid system structure for continuous control systems with discrete decisionmaking processes is presented. The functionality is demonstrated on a highly automated driving system on freeways and on the emergency stop assistant. Due to the fact that the robustness will be a determining factor for future usage of these systems, the presented structure focuses on this feature.",
"title": ""
},
{
"docid": "4a83c053ed9c17ed99262d926394ec83",
"text": "Multiangle social network recommendation algorithms (MSN) and a new assessment method, called similarity network evaluation (SNE), are both proposed. From the viewpoint of six dimensions, the MSN are classified into six algorithms, including user-based algorithm from resource point (UBR), user-based algorithm from tag point (UBT), resource-based algorithm from tag point (RBT), resource-based algorithm from user point (RBU), tag-based algorithm from resource point (TBR), and tag-based algorithm from user point (TBU). Compared with the traditional recall/precision (RP) method, the SNE is more simple, effective, and visualized. The simulation results show that TBR and UBR are the best algorithms, RBU and TBU are the worst ones, and UBT and RBT are in the medium levels.",
"title": ""
},
{
"docid": "af1257e27c0a6010a902e78dc8301df4",
"text": "A 20-MHz to 3-GHz wide-range multiphase delay-locked loop (DLL) has been realized in 90-nm CMOS technology. The proposed delay cell extends the operation frequency range. A scaling circuit is adopted to lower the large delay gain when the frequency of the input clock is low. The core area of this DLL is 0.005 mm2. The measured power consumption values are 0.4 and 3.6 mW for input clocks of 20 MHz and 3 GHz, respectively. The measured peak-to-peak and root-mean-square jitters are 2.3 and 16 ps at 3 GHz, respectively.",
"title": ""
},
{
"docid": "dca156a404916f2ab274406ad565e391",
"text": "Liang Zhou, member IEEE and YiFeng Wu, member IEEE Transphorm, Inc. 75 Castilian Dr., Goleta, CA, 93117 USA [email protected] Abstract: This paper presents a true bridgeless totem-pole Power-Factor-Correction (PFC) circuit using GaN HEMT. Enabled by a diode-free GaN power HEMT bridge with low reverse-recovery charge, very-high-efficiency single-phase AC-DC conversion is realized using a totem-pole topology without the limit of forward voltage drop from a fast diode. When implemented with a pair of sync-rec MOSFETs for line rectification, 99% efficiency is achieved at 230V ac input and 400 dc output in continuous-current mode.",
"title": ""
},
{
"docid": "2dd8b7004f45ae374a72e2c7d40b0892",
"text": "In this letter, a multifeed tightly coupled patch array antenna capable of broadband operation is analyzed and designed. First, an antenna array composed of infinite elements with each element excited by a feed is proposed. To produce specific polarized radiation efficiently, a new patch element is proposed, and its characteristics are studied based on a 2-port network model. Full-wave simulation results show that the infinite antenna array exhibits both a high efficiency and desirable radiation pattern in a wide frequency band (10 dB bandwidth) from 1.91 to 5.35 GHz (94.8%). Second, to validate its outstanding performance, a realistic finite 4 × 4 antenna prototype is designed, fabricated, and measured in our laboratory. The experimental results agree well with simulated ones, where the frequency bandwidth (VSWR < 2) is from 2.5 to 3.8 GHz (41.3%). The inherent compact size, light weight, broad bandwidth, and good radiation characteristics make this array antenna a promising candidate for future communication and advanced sensing systems.",
"title": ""
},
{
"docid": "21b07dc04d9d964346748eafe3bcfc24",
"text": "Online social data like user-generated content, expressed or implicit relations among people, and behavioral traces are at the core of many popular web applications and platforms, driving the research agenda of researchers in both academia and industry. The promises of social data are many, including the understanding of \"what the world thinks»» about a social issue, brand, product, celebrity, or other entity, as well as enabling better decision-making in a variety of fields including public policy, healthcare, and economics. However, many academics and practitioners are increasingly warning against the naive usage of social data. They highlight that there are biases and inaccuracies occurring at the source of the data, but also introduced during data processing pipeline; there are methodological limitations and pitfalls, as well as ethical boundaries and unexpected outcomes that are often overlooked. Such an overlook can lead to wrong or inappropriate results that can be consequential.",
"title": ""
},
{
"docid": "57c780448d8771a0d22c8ed147032a71",
"text": "“Social TV” is a term that broadly describes the online social interactions occurring between viewers while watching television. In this paper, we show that TV networks can derive value from social media content placed in shows because it leads to increased word of mouth via online posts, and it highly correlates with TV show related sales. In short, we show that TV event triggers change the online behavior of viewers. In this paper, we first show that using social media content on the televised American reality singing competition, The Voice, led to increased social media engagement during the TV broadcast. We then illustrate that social media buzz about a contestant after a performance is highly correlated with song sales from that contestant’s performance. We believe this to be the first study linking TV content to buzz and sales in real time.",
"title": ""
},
{
"docid": "e2f2961ab8c527914c3d23f8aa03e4bf",
"text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.",
"title": ""
},
{
"docid": "9b17dd1fc2c7082fa8daecd850fab91c",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
}
] | scidocsrr |
f13aed0918913cda0bc7bd425da0422e | CAML: Fast Context Adaptation via Meta-Learning | [
{
"docid": "e28ab50c2d03402686cc9a465e1231e7",
"text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.",
"title": ""
}
] | [
{
"docid": "e8cf458c60dc7b4a8f71df2fabf1558d",
"text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.",
"title": ""
},
{
"docid": "577e5f82a0a195b092d7a15df110bd96",
"text": "We propose a powerful new tool for conducting research on computational intelligence and games. `PyVGDL' is a simple, high-level description language for 2D video games, and the accompanying software library permits parsing and instantly playing those games. The streamlined design of the language is based on defining locations and dynamics for simple building blocks, and the interaction effects when such objects collide, all of which are provided in a rich ontology. It can be used to quickly design games, without needing to deal with control structures, and the concise language is also accessible to generative approaches. We show how the dynamics of many classical games can be generated from a few lines of PyVGDL. The main objective of these generated games is to serve as diverse benchmark problems for learning and planning algorithms; so we provide a collection of interfaces for different types of learning agents, with visual or abstract observations, from a global or first-person viewpoint. To demonstrate the library's usefulness in a broad range of learning scenarios, we show how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.",
"title": ""
},
{
"docid": "39d6a07bc7065499eb4cb0d8adb8338a",
"text": "This paper proposes a DNS Name Autoconfiguration (called DNSNA) for not only the global DNS names, but also the local DNS names of Internet of Things (IoT) devices. Since there exist so many devices in the IoT environments, it is inefficient to manually configure the Domain Name System (DNS) names of such IoT devices. By this scheme, the DNS names of IoT devices can be autoconfigured with the device's category and model in IPv6-based IoT environments. This DNS name lets user easily identify each IoT device for monitoring and remote-controlling in IoT environments. In the procedure to generate and register an IoT device's DNS name, the standard protocols of Internet Engineering Task Force (IETF) are used. Since the proposed scheme resolves an IoT device's DNS name into an IPv6 address in unicast through an authoritative DNS server, it generates less traffic than Multicast DNS (mDNS), which is a legacy DNS application for the DNS name service in IoT environments. Thus, the proposed scheme is more appropriate in global IoT networks than mDNS. This paper explains the design of the proposed scheme and its service scenario, such as smart road and smart home. The results of the simulation prove that our proposal outperforms the legacy scheme in terms of energy consumption.",
"title": ""
},
{
"docid": "2e89bc59f85b14cf40a868399a3ce351",
"text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.",
"title": ""
},
{
"docid": "6981598efd4a70f669b5abdca47b7ea1",
"text": "The in-flight alignment is a critical stage for airborne inertial navigation system/Global Positioning System (INS/GPS) applications. The alignment task is usually carried out by the Kalman filtering technique that necessitates a good initial attitude to obtain a satisfying performance. Due to the airborne dynamics, the in-flight alignment is much more difficult than the alignment on the ground. An optimization-based coarse alignment approach that uses GPS position/velocity as input, founded on the newly-derived velocity/position integration formulae is proposed. Simulation and flight test results show that, with the GPS lever arm well handled, it is potentially able to yield the initial heading up to 1 deg accuracy in 10 s. It can serve as a nice coarse in-flight alignment without any prior attitude information for the subsequent fine Kalman alignment. The approach can also be applied to other applications that require aligning the INS on the run.",
"title": ""
},
{
"docid": "05b4df16c35a89ee2a5b9ac482e0a297",
"text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, that has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariant classification while segmenting gray and white matter.",
"title": ""
},
{
"docid": "83224037f402a44cf7f819acbb91d69f",
"text": "Chinese word segmentation (CWS) is an important task for Chinese NLP. Recently, many neural network based methods have been proposed for CWS. However, these methods require a large number of labeled sentences for model training, and usually cannot utilize the useful information in Chinese dictionary. In this paper, we propose two methods to exploit the dictionary information for CWS. The first one is based on pseudo labeled data generation, and the second one is based on multi-task learning. The experimental results on two benchmark datasets validate that our approach can effectively improve the performance of Chinese word segmentation, especially when training data is insufficient.",
"title": ""
},
{
"docid": "7fdc12cbaa29b1f59d2a850a348317b7",
"text": "Arhinia is a rare condition characterised by the congenital absence of nasal structures, with different patterns of presentation, and often associated with other craniofacial or somatic anomalies. To date, about 30 surviving cases have been reported. We report the case of a female patient aged 6 years, who underwent internal and external nose reconstruction using a staged procedure: a nasal airway was obtained through maxillary osteotomy and ostectomy, and lined with a local skin flap and split-thickness skin grafts; then the external nose was reconstructed with an expanded frontal flap, armed with an autogenous rib framework.",
"title": ""
},
{
"docid": "c10829be320a9be6ecbc9ca751e8b56e",
"text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.",
"title": ""
},
{
"docid": "6ef52ad99498d944e9479252d22be9c8",
"text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.",
"title": ""
},
{
"docid": "0c891acac99279cff995a7471ea9aaff",
"text": "The mainstay of diagnosis for Treponema pallidum infections is based on nontreponemal and treponemal serologic tests. Many new diagnostic methods for syphilis have been developed, using specific treponemal antigens and novel formats, including rapid point-of-care tests, enzyme immunoassays, and chemiluminescence assays. Although most of these newer tests are not yet cleared for use in the United States by the Food and Drug Administration, their performance and ease of automation have promoted their application for syphilis screening. Both sensitive and specific, new screening tests detect antitreponemal IgM and IgG antibodies by use of wild-type or recombinant T. pallidum antigens. However, these tests cannot distinguish between recent and remote or treated versus untreated infections. In addition, the screening tests require confirmation with nontreponemal tests. This use of treponemal tests for screening and nontreponemal serologic tests as confirmatory tests is a reversal of long-held practice. Clinicians need to understand the science behind these tests to use them properly in syphilis management.",
"title": ""
},
{
"docid": "34a21bf5241d8cc3a7a83e78f8e37c96",
"text": "A current-biased voltage-programmed (CBVP) pixel circuit for active-matrix organic light-emitting diode (AMOLED) displays is proposed. The pixel circuit can not only ensure an accurate and fast compensation for the threshold voltage variation and degeneration of the driving TFT and the OLED, but also provide the OLED with a negative bias during the programming period. The negative bias prevents the OLED from a possible light emitting during the programming period and potentially suppresses the degradation of the OLED.",
"title": ""
},
{
"docid": "a0ff157e543d7944a4a83c95dd0da7b3",
"text": "This paper provides a review on some of the significant research work done on abstractive text summarization. The process of generating the summary from one or more text corpus, by keeping the key points in the corpus is called text summarization. The most prominent technique in text summarization is an abstractive and extractive method. The extractive summarization is purely based on the algorithm and it just copies the most relevant sentence/words from the input text corpus and creating the summary. An abstractive method generates new sentences/words that may/may not be in the input corpus. This paper focuses on the abstractive text summarization. This paper explains the overview of the various processes in abstractive text summarization. It includes data processing, word embedding, basic model architecture, training, and validation process and the paper narrates the current research in this field. It includes different types of architectures, attention mechanism, supervised and reinforcement learning, the pros and cons of different architecture. Systematic comparison of different text summarization models will provide the future direction of text summarization.",
"title": ""
},
{
"docid": "4318041c3cf82ce72da5983f20c6d6c4",
"text": "In line with cloud computing emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. The paper concludes with a summary of avenues for further research.",
"title": ""
},
{
"docid": "5691ca09e609aea46b9fd5e7a83d165a",
"text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.",
"title": ""
},
{
"docid": "370b416dd51cfc08dc9b97f87c500eba",
"text": "Apollonian circle packings arise by repeatedly filling the interstices between mutually tangent circles with further tangent circles. It is possible for every circle in such a packing to have integer radius of curvature, and we call such a packing an integral Apollonian circle packing. This paper studies number-theoretic properties of the set of integer curvatures appearing in such packings. Each Descartes quadruple of four tangent circles in the packing gives an integer solution to the Descartes equation, which relates the radii of curvature of four mutually tangent circles: x þ y þ z þ w 1⁄4 1 2 ðx þ y þ z þ wÞ: Each integral Apollonian circle packing is classified by a certain root quadruple of integers that satisfies the Descartes equation, and that corresponds to a particular quadruple of circles appearing in the packing. We express the number of root quadruples with fixed minimal element n as a class number, and give an exact formula for it. We study which integers occur in a given integer packing, and determine congruence restrictions which sometimes apply. We present evidence suggesting that the set of integer radii of curvatures that appear in an integral Apollonian circle packing has positive density, and in fact represents all sufficiently large integers not excluded by Corresponding author. E-mail addresses: [email protected] (R.L. Graham), [email protected] (J.C. Lagarias), colinm@ research.avayalabs.com (C.L. Mallows), [email protected] (A.R. Wilks), catherine.yan@math. tamu.edu (C.H. Yan). 1 Current address: Department of Computer Science, University of California at San Diego, La Jolla, CA 92093, USA. 2 Work partly done during a visit to the Institute for Advanced Study. 3 Current address: Avaya Labs, Basking Ridge, NJ 07920, USA. 0022-314X/03/$ see front matter r 2003 Elsevier Science (USA). All rights reserved. doi:10.1016/S0022-314X(03)00015-5 congruence conditions. Finally, we discuss asymptotic properties of the set of curvatures obtained as the packing is recursively constructed from a root quadruple. r 2003 Elsevier Science (USA). All rights reserved.",
"title": ""
},
{
"docid": "5988ef7f9c5b8dd125c78c39f26d5a70",
"text": "Diagnosis Related Group (DRG) upcoding is an anomaly in healthcare data that costs hundreds of millions of dollars in many developed countries. DRG upcoding is typically detected through resource intensive auditing. As supervised modeling of DRG upcoding is severely constrained by scope and timeliness of past audit data, we propose in this paper an unsupervised algorithm to filter data for potential identification of DRG upcoding. The algorithm has been applied to a hip replacement/revision dataset and a heart-attack dataset. The results are consistent with the assumptions held by domain experts.",
"title": ""
},
{
"docid": "e4b02298a2ff6361c0a914250f956911",
"text": "This paper studies efficient means in dealing with intracategory diversity in object detection. Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.",
"title": ""
},
{
"docid": "16eff9f2b7626f53baa95463f18d518a",
"text": "The need for fine-grained power management in digital ICs has led to the design and implementation of compact, scalable low-drop out regulators (LDOs) embedded deep within logic blocks. While analog LDOs have traditionally been used in digital ICs, the need for digitally implementable LDOs embedded in digital functional units for ultrafine grained power management is paramount. This paper presents a fully-digital, phase locked LDO implemented in 32 nm CMOS. The control model of the proposed design has been provided and limits of stability have been shown. Measurement results with a resistive load as well as a digital load exhibit peak current efficiency of 98%.",
"title": ""
},
{
"docid": "0e8efa2e84888547a1a4502883316a7a",
"text": "Conservation and sustainable management of wetlands requires participation of local stakeholders, including communities. The Bigodi Wetland is unusual because it is situated in a common property landscape but the local community has been running a successful community-based natural resource management programme (CBNRM) for the wetland for over a decade. Whilst external visitors to the wetland provide ecotourism revenues we sought to quantify community benefits through the use of wetland goods such as firewood, plant fibres, and the like, and costs associated with wild animals damaging farming activities. We interviewed 68 households living close to the wetland and valued their cash and non-cash incomes from farming and collection of non-timber forest products (NTFPs) and water. The majority of households collected a wide variety of plant and fish resources and water from the wetland for household use and livestock. Overall, 53% of total household cash and non-cash income was from collected products, mostly the wetland, 28% from arable agriculture, 12% from livestock and 7% from employment and cash transfers. Female-headed households had lower incomes than male-headed ones, and with a greater reliance on NTFPs. Annual losses due to wildlife damage were estimated at 4.2% of total gross income. Most respondents felt that the wetland was important for their livelihoods, with more than 80% identifying health, education, craft materials and firewood as key benefits. Ninety-five percent felt that the wetland was in a good condition and that most residents observed the agreed CBNRM rules regarding use of the wetland. This study confirms the success of the locally run CBNRM processes underlying the significant role that the wetland plays in local livelihoods.",
"title": ""
}
] | scidocsrr |
c95113263d1ab33b8fa34bfec122bcff | CoBoLD — A bonding mechanism for modular self-reconfigurable mobile robots | [
{
"docid": "9055008e0c6837b6c9b494922eb0770a",
"text": "One of the primary impediments to building ensembles of modular robots is the complexity and number of mechanical mechanisms used to construct the individual modules. As part of the Claytronics project - which aims to build very large ensembles of modular robots - we investigate how to simplify each module by eliminating moving parts and reducing the number of mechanical mechanisms on each robot by using force-at-a-distance actuators. Additionally, we are also investigating the feasibility of using these unary actuators to improve docking performance, implement intermodule adhesion, power transfer, communication, and sensing. In this paper we describe our most recent results in the magnetic domain, including our first design sufficiently robust to operate reliably in groups greater than two modules. Our work should be seen as an extension of systems such as Fracta [9], and a contrasting line of inquiry to several other researchers' prior efforts that have used magnetic latching to attach modules to one another but relied upon a powered hinge [10] or telescoping mechanism [12] within each module to facilitate self-reconfiguration.",
"title": ""
},
{
"docid": "6befac01d5a3f21100a54de43ee62845",
"text": "Robots used for tasks in space have strict requirements. Modular reconfigurable robots have a variety of attributes that are advantageous for these conditions including the ability to serve as many tools at once saving weight, packing into compressed forms saving space and having large redundancy to increase robustness. Self-reconfigurable systems can also self-repair as well as automatically adapt to changing conditions or ones that were not anticipated. PolyBot may serve well in the space manipulation and surface mobility class of space applications.",
"title": ""
}
] | [
{
"docid": "1cd77d97f27b45d903ffcecda02795a5",
"text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.",
"title": ""
},
{
"docid": "58317baa129fd1f164813dcaf566b543",
"text": "Affective image understanding has been extensively studied in the last decade since more and more users express emotion via visual contents. While current algorithms based on convolutional neural networks aim to distinguish emotional categories in a discrete label space, the task is inherently ambiguous. This is mainly because emotional labels with the same polarity (i.e., positive or negative) are highly related, which is different from concrete object concepts such as cat, dog and bird. To the best of our knowledge, few methods focus on leveraging such characteristic of emotions for affective image understanding. In this work, we address the problem of understanding affective images via deep metric learning and propose a multi-task deep framework to optimize both retrieval and classification goals. We propose the sentiment constraints adapted from the triplet constraints, which are able to explore the hierarchical relation of emotion labels. We further exploit the sentiment vector as an effective representation to distinguish affective images utilizing the texture representation derived from convolutional layers. Extensive evaluations on four widely-used affective datasets, i.e., Flickr and Instagram, IAPSa, Art Photo, and Abstract Paintings, demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both affective image retrieval and classification tasks.",
"title": ""
},
{
"docid": "6fe39cbe3811ac92527ba60620b39170",
"text": "Providing accurate information about human's state, activity is one of the most important elements in Ubiquitous Computing. Various applications can be enabled if one's state, activity can be recognized. Due to the low deployment cost, non-intrusive sensing nature, Wi-Fi based activity recognition has become a promising, emerging research area. In this paper, we survey the state-of-the-art of the area from four aspects ranging from historical overview, theories, models, key techniques to applications. In addition to the summary about the principles, achievements of existing work, we also highlight some open issues, research directions in this emerging area.",
"title": ""
},
{
"docid": "5570d8a799dfffa220e5d81a03468a45",
"text": "Several results appeared that show significant reduction in time for matrix multiplication, singular value decomposition as well as linear (lscr2) regression, all based on data dependent random sampling. Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear time pass efficient matrix computation. Our main contribution is summarized as follows. 1) Independent of the results of Har-Peled and of Deshpande and Vempala, one of the first - and to the best of our knowledge the most efficient - relative error (1 + epsi) parA $AkparF approximation algorithms for the singular value decomposition of an m times n matrix A with M non-zero entries that requires 2 passes over the data and runs in time O((M(k/epsi+k log k) + (n+m)(k/epsi+k log k)2)log (1/sigma)). 2) The first o(nd2) time (1 + epsi) relative error approximation algorithm for n times d linear (lscr2) regression. 3) A matrix multiplication and norm approximation algorithm that easily applies to implicitly given matrices and can be used as a black box probability boosting tool",
"title": ""
},
{
"docid": "39ccad7a2c779e277194e958820b82ad",
"text": "Smart cities are struggling with using public space efficiently and decreasing pollution at the same time. For this governments have embraced smart parking initiatives, which should result in a high utilization of public space and minimization of the driving, in this way reducing the emissions of cars. Yet, simply opening data about the availability of public spaces results in more congestions as multiple cars might be heading for the same parking space. In this work, we propose a Multiple Criteria based Parking space Reservation (MCPR) algorithm, for reserving a space for a user to deal with parking space in a fair way. Users' requirements are the main driving factor for the algorithm and used as criteria in MCPR. To evaluate the algorithm, simulations for three set of user preferences were made. The simulation results show that the algorithm satisfied the users' request fairly for all the three preferences. The algorithm helps users automatically to find a parking space according to the users' requirements. The algorithm can be used in a smart parking system to search for a parking space on behalf of user and send parking space information to the user.",
"title": ""
},
{
"docid": "3012eafa396cc27e8b05fd71dd9bc13b",
"text": "An assessment of Herman and Chomsky’s 1988 five-filter propaganda model suggests it is mainly valuable for identifying areas in which researchers should look for evidence of collaboration (whether intentional or otherwise) between mainstream media and the propaganda aims of the ruling establishment. The model does not identify methodologies for determining the relative weight of independent filters in different contexts, something that would be useful in its future development. There is a lack of precision in the characterization of some of the filters. The model privileges the structural factors that determine propagandized news selection, and therefore eschews or marginalizes intentionality. This paper extends the model to include the “buying out” of journalists or their publications by intelligence and related special interest organizations. It applies the extended six-filter model to controversies over reporting by The New York Times of the build-up towards the US invasion of Iraq in 2003, the issue of weapons of mass destruction in general, and the reporting of The New York Times correspondent Judith Miller in particular, in the context of broader critiques of US mainstream media war coverage. The controversies helped elicit evidence of the operation of some filters of the propaganda model, including dependence on official sources, fear of flak, and ideological convergence. The paper finds that the filter of routine news operations needs to be counterbalanced by its opposite, namely non-routine abuses of standard operating procedures. While evidence of the operation of other filters was weaker, this is likely due to difficulties of observability, as there are powerful deductive reasons for maintaining all six filters within the framework of media propaganda analysis.",
"title": ""
},
{
"docid": "cb6d60c4948bcf2381cb03a0e7dc8312",
"text": "While humor has been historically studied from a psychological, cognitive and linguistic standpoint, its study from a computational perspective is an area yet to be explored in Computational Linguistics. There exist some previous works, but a characterization of humor that allows its automatic recognition and generation is far from being specified. In this work we build a crowdsourced corpus of labeled tweets, annotated according to its humor value, letting the annotators subjectively decide which are humorous. A humor classifier for Spanish tweets is assembled based on supervised learning, reaching a precision of 84% and a recall of 69%.",
"title": ""
},
{
"docid": "a7181a3ddebed92d352ecf67e76c6e81",
"text": "Empirical, hypothesis-driven, experimentation is at the heart of the scientific discovery process and has become commonplace in human-factors related fields. To enable the integration of visual analytics in such experiments, we introduce VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE is comprised of a back-end ontology which can model several experimental designs encountered in these fields. This formalization allows VEEVVIE to capture experimental data in a query-able form and makes it accessible through a front-end interface. This front-end offers several multi-dimensional visualization widgets with built-in filtering and highlighting functionality. VEEVVIE is also expandable to support custom experimental measurements and data types through a plug-in visualization widget architecture. We demonstrate VEEVVIE through several case studies of visual analysis, performed on the design and data collected during an experiment on the scalability of high-resolution, immersive, tiled-display walls.",
"title": ""
},
{
"docid": "ad950cf335913941803a7af7cba969d3",
"text": "Storage systems rely on maintenance tasks, such as backup and layout optimization, to ensure data availability and good performance. These tasks access large amounts of data and can significantly impact foreground applications. We argue that storage maintenance can be performed more efficiently by prioritizing processing of data that is currently cached in memory. Data can be cached either due to other maintenance tasks requesting it previously, or due to overlapping foreground I/O activity.\n We present Duet, a framework that provides notifications about page-level events to maintenance tasks, such as a page being added or modified in memory. Tasks use these events as hints to opportunistically process cached data. We show that tasks using Duet can complete maintenance work more efficiently because they perform fewer I/O operations. The I/O reduction depends on the amount of data overlap with other maintenance tasks and foreground applications. Consequently, Duet's efficiency increases with additional tasks because opportunities for synergy appear more often.",
"title": ""
},
{
"docid": "c508f62dfd94d3205c71334638790c54",
"text": "Financial and capital markets (especially stock markets) are considered high return investment fields, which in the same time are dominated by uncertainty and volatility. Stock market prediction tries to reduce this uncertainty and consequently the risk. As stock markets are influenced by many economical, political and even psychological factors, it is very difficult to forecast the movement of future values. Since classical statistical methods (primarily technical and fundamental analysis) are unable to deal with the non-linearity in the dataset, thus it became necessary the utilization of more advanced forecasting procedures. Financial prediction is a research active area and neural networks have been proposed as one of the most promising methods for such predictions. Artificial Neural Networks (ANNs) mimics, simulates the learning capability of the human brain. NNs are able to find accurate solutions in a complex, noisy environment or even to deal efficiently with partial information. In the last decade the ANNs have been widely used for predicting financial markets, because they are capable to detect and reproduce linear and nonlinear relationships among a set of variables. Furthermore they have a potential of learning the underlying mechanics of stock markets, i.e. to capture the complex dynamics and non-linearity of the stock market time series. In this paper, study we will get acquainted with some financial time series analysis concepts and theories linked to stock markets, as well as with the neural networks based systems and hybrid techniques that were used to solve several forecasting problems concerning the capital, financial and stock markets. Putting the foregoing experimental results to use, we will develop, implement a multilayer feedforward neural network based financial time series forecasting system. Thus, this system will be used to predict the future index values of major US and European stock exchanges and the evolution of interest rates as well as the future stock price of some US mammoth companies (primarily from IT branch).",
"title": ""
},
{
"docid": "26439bd538c8f0b5d6fba3140e609aab",
"text": "A planar antenna with a broadband feeding structure is presented and analyzed for ultrawideband applications. The proposed antenna consists of a suspended radiator fed by an n-shape microstrip feed. Study shows that this antenna achieves an impedance bandwidth from 3.1-5.1 GHz (48%) for a reflection of coefficient of iotaS11iota < -10 dB, and an average gain of 7.7 dBi. Stable boresight radiation patterns are achieved across the entire operating frequency band, by suppressing the high order mode resonances. This design exhibits good mechanical tolerance and manufacturability.",
"title": ""
},
{
"docid": "25b183ce7ecc4b9203686c7ea68aacea",
"text": "A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification–based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. Our analysis indicates that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, when seen through the lens of classification, the diversity of GAN data is orders of magnitude less than that of the original data.",
"title": ""
},
{
"docid": "813a0d47405d133263deba0da6da27a8",
"text": "The demands on dielectric material measurements have increased over the years as electrical components have been miniaturized and device frequency bands have increased. Well-characterized dielectric measurements on thin materials are needed for circuit design, minimization of crosstalk, and characterization of signal-propagation speed. Bulk material applications have also increased. For accurate dielectric measurements, each measurement band and material geometry requires specific fixtures. Engineers and researchers must carefully match their material system and uncertainty requirements to the best available measurement system. Broadband measurements require transmission-line methods, and accurate measurements on low-loss materials are performed in resonators. The development of the most accurate methods for each application requires accurate fixture selection in terms of field geometry, accurate field models, and precise measurement apparatus.",
"title": ""
},
{
"docid": "40649a3bc0ea3ac37ed99dca22e52b92",
"text": "This paper presents a 40 Gb/s serial-link receiver including an adaptive equalizer and a CDR circuit. A parallel-path equalizing filter is used to compensate the high-frequency loss in copper cables. The adaptation is performed by only varying the gain in the high-pass path, which allows a single loop for proper control and completely removes the RC filters used for separately extracting the high- and low-frequency contents of the signal. A full-rate bang-bang phase detector with only five latches is proposed in the following CDR circuit. Minimizing the number of latches saves the power consumption and the area occupied by inductors. The performance is also improved by avoiding complicated routing of high-frequency signals. The receiver is able to recover 40 Gb/s data passing through a 4 m cable with 10 dB loss at 20 GHz. For an input PRBS of 2 7-1, the recovered clock jitter is 0.3 psrms and 4.3 pspp. The retimed data exhibits 500 mV pp output swing and 9.6 pspp jitter with BER <10-12. Fabricated in 90 nm CMOS technology, the receiver consumes 115 mW , of which 58 mW is dissipated in the equalizer and 57 mW in the CDR.",
"title": ""
},
{
"docid": "e0b1056544c3dc5c3b6f5bc072a72831",
"text": "In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available.2.",
"title": ""
},
{
"docid": "37c8fa72d0959a64460dbbe4fdb8c296",
"text": "This paper presents a model which generates architectural layout for a single flat having regular shaped spaces; Bedroom, Bathroom, Kitchen, Balcony, Living and Dining Room. Using constraints at two levels; Topological (Adjacency, Compactness, Vaastu, and Open and Closed face constraints) and Dimensional (Length to Width ratio constraint), Genetic Algorithms have been used to generate the topological arrangement of spaces in the layout and further if required, feasibility have been dimensionally analyzed. Further easy evacuation form the selected layout in case of adversity has been proposed using Dijkstra's Algorithm. Later the proposed model has been tested for efficiency using various test cases. This paper also presents a classification and categorization of various problems of space planning.",
"title": ""
},
{
"docid": "885b7e9fb662d938fc8264597fa070b8",
"text": "Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words – natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.",
"title": ""
},
{
"docid": "95410e1bfb8a5f42ff949d061b1cd4b9",
"text": "This paper presents a high-level hand feature extraction method for real-time gesture recognition. Firstly, the fingers are modelled as cylindrical objects due to their parallel edge feature. Then a novel algorithm is proposed to directly extract fingers from salient hand edges. Considering the hand geometrical characteristics, the hand posture is segmented and described based on the finger positions, palm center location and wrist position. A weighted radial projection algorithm with the origin at the wrist position is applied to localize each finger. The developed system can not only extract extensional fingers but also flexional fingers with high accuracy. Furthermore, hand rotation and finger angle variation have no effect on the algorithm performance. The orientation of the gesture can be calculated without the aid of arm direction and it would not be disturbed by the bare arm area. Experiments have been performed to demonstrate that the proposed method can directly extract high-level hand feature and estimate hand poses in real-time. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ceedf70c92099fc8612a38f91f2c9507",
"text": "Recent work has demonstrated the value of social media monitoring for health surveillance (e.g., tracking influenza or depression rates). It is an open question whether such data can be used to make causal inferences (e.g., determining which activities lead to increased depression rates). Even in traditional, restricted domains, estimating causal effects from observational data is highly susceptible to confounding bias. In this work, we estimate the effect of exercise on mental health from Twitter, relying on statistical matching methods to reduce confounding bias. We train a text classifier to estimate the volume of a user’s tweets expressing anxiety, depression, or anger, then compare two groups: those who exercise regularly (identified by their use of physical activity trackers like Nike+), and a matched control group. We find that those who exercise regularly have significantly fewer tweets expressing depression or anxiety; there is no significant difference in rates of tweets expressing anger. We additionally perform a sensitivity analysis to investigate how the many experimental design choices in such a study impact the final conclusions, including the quality of the classifier and the construction of the control group.",
"title": ""
},
{
"docid": "3eb8a99236905f59af8a32e281189925",
"text": "F2FS is a Linux file system designed to perform well on modern flash storage devices. The file system builds on append-only logging and its key design decisions were made with the characteristics of flash storage in mind. This paper describes the main design ideas, data structures, algorithms and the resulting performance of F2FS. Experimental results highlight the desirable performance of F2FS; on a state-of-the-art mobile system, it outperforms EXT4 under synthetic workloads by up to 3.1× (iozone) and 2× (SQLite). It reduces elapsed time of several realistic workloads by up to 40%. On a server system, F2FS is shown to perform better than EXT4 by up to 2.5× (SATA SSD) and 1.8× (PCIe SSD).",
"title": ""
}
] | scidocsrr |
e60df0a203c3d0a5152375c99dfb9fe7 | The relationship between social network usage and some personality traits | [
{
"docid": "7ede96303aa3c7f98f60cb545d51ccae",
"text": "The explosion in social networking sites such as MySpace, Facebook, Bebo and Friendster is widely regarded as an exciting opportunity, especially for youth. Yet the public response tends to be one of puzzled dismay regarding, supposedly, a generation with many friends but little sense of privacy and a narcissistic fascination with self-display. This article explores teenagers” practices of social networking in order to uncover the subtle connections between online opportunity and risk. While younger teenagers relish the opportunities to continuously recreate a highly decorated, stylistically elaborate identity, older teenagers favour a plain aesthetic that foregrounds their links to others, thus expressing a notion of identity lived through authentic relationships. The article further contrasts teenagers” graded conception of “friends” with the binary 1 Published as Livingstone, S. (2008) Taking risky opportunities in youthful content creation: teenagers’ use of social networking sites for intimacy, privacy and self-expression. New Media & Society, 10(3): 393-411. Available in Sage Journal Online (Sage Publications Ltd. – All rights reserved): http://nms.sagepub.com/content/10/3/393.abstract 2 Thanks to the Research Council of Norway for funding the Mediatized Stories: Mediation Perspectives On Digital Storytelling Among Youth of which this project is part. I also thank David Brake, Shenja van der Graaf, Angela Jones, Ellen Helsper, Maria Kyriakidou, Annie Mullins, Toshie Takahashi, and two anonymous reviewers for their comments on an earlier version of this article. Last, thanks to the teenagers who participated in this project. 3 Sonia Livingstone is Professor of Social Psychology in the Department of Media and Communications at the London School of Economics and Political Science. She is author or editor of ten books and 100+ academic articles and chapters in the fields of media audiences, children and the internet, domestic contexts of media use and media literacy. Recent books include Young People and New Media (Sage, 2002), The Handbook of New Media (edited, with Leah Lievrouw, Sage, 2006), and Public Connection? Media Consumption and the Presumption of Attention (with Nick Couldry and Tim Markham, Palgrave, 2007). She currently directs the thematic research network, EU Kids Online, for the EC’s Safer Internet Plus programme. Email [email protected]",
"title": ""
},
{
"docid": "e66fb8ed9e26b058a419d34d9c015a4c",
"text": "Children and adolescents now communicate online to form and/or maintain relationships with friends, family, and strangers. Relationships in \"real life\" are important for children's and adolescents' psychosocial development; however, they can be difficult for those who experience feelings of loneliness and/or social anxiety. The aim of this study was to investigate differences in usage of online communication patterns between children and adolescents with and without self-reported loneliness and social anxiety. Six hundred twenty-six students ages 10 to 16 years completed a survey on the amount of time they spent communicating online, the topics they discussed, the partners they engaged with, and their purposes for communicating over the Internet. Participants were administered a shortened version of the UCLA Loneliness Scale and an abbreviated subscale of the Social Anxiety Scale for Adolescents (SAS-A). Additionally, age and gender differences in usage of the online communication patterns were examined across the entire sample. Findings revealed that children and adolescents who self-reported being lonely communicated online significantly more frequently about personal and intimate topics than did those who did not self-report being lonely. The former were motivated to use online communication significantly more frequently to compensate for their weaker social skills to meet new people. Results suggest that Internet usage allows them to fulfill critical needs of social interactions, self-disclosure, and identity exploration. Future research, however, should explore whether or not the benefits derived from online communication may also facilitate lonely children's and adolescents' offline social relationships.",
"title": ""
}
] | [
{
"docid": "97abbb650710386d1e28533e8134c42c",
"text": "Airway pressure limitation is now a largely accepted strategy in adult respiratory distress syndrome (ARDS) patients; however, some debate persists about the exact level of plateau pressure which can be safely used. The objective of the present study was to examine if the echocardiographic evaluation of right ventricular function performed in ARDS may help to answer to this question. For more than 20 years, we have regularly monitored right ventricular function by echocardiography in ARDS patients, during two different periods, a first (1980–1992) where airway pressure was not limited, and a second (1993–2006) where airway pressure was limited. By pooling our data, we can observe the effect of a large range of plateau pressure upon mortality rate and incidence of acute cor pulmonale. In this whole group of 352 ARDS patients, mortality rate and incidence of cor pulmonale were 80 and 56%, respectively, when plateau pressure was > 35 cmH2O; 42 and 32%, respectively, when plateau pressure was between 27 and 35 cmH2O; and 30 and 13%, respectively, when plateau pressure was < 27 cmH2O. Moreover, a clear interaction between plateau pressure and cor pulmonale was evidenced: whereas the odd ratio of dying for an increase in plateau pressure from 18–26 to 27–35 cm H2O in patients without cor pulmonale was 1.05 (p = 0.635), it was 3.32 in patients with cor pulmonale (p < 0.034). We hypothesize that monitoring of right ventricular function by echocardiography at bedside might help to control the safety of plateau pressure used in ARDS.",
"title": ""
},
{
"docid": "ff95e468402fde74e334b83e2a1f1d23",
"text": "The composition of fatty acids in the diets of both human and domestic animal species can regulate inflammation through the biosynthesis of potent lipid mediators. The substrates for lipid mediator biosynthesis are derived primarily from membrane phospholipids and reflect dietary fatty acid intake. Inflammation can be exacerbated with intake of certain dietary fatty acids, such as some ω-6 polyunsaturated fatty acids (PUFA), and subsequent incorporation into membrane phospholipids. Inflammation, however, can be resolved with ingestion of other fatty acids, such as ω-3 PUFA. The influence of dietary PUFA on phospholipid composition is influenced by factors that control phospholipid biosynthesis within cellular membranes, such as preferential incorporation of some fatty acids, competition between newly ingested PUFA and fatty acids released from stores such as adipose, and the impacts of carbohydrate metabolism and physiological state. The objective of this review is to explain these factors as potential obstacles to manipulating PUFA composition of tissue phospholipids by specific dietary fatty acids. A better understanding of the factors that influence how dietary fatty acids can be incorporated into phospholipids may lead to nutritional intervention strategies that optimize health.",
"title": ""
},
{
"docid": "721b0ac6cc52ea434e51d95376cf0a60",
"text": "The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first leans a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance’s features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box.",
"title": ""
},
{
"docid": "de9aa1b5c6e61da518e87a55d02c45e9",
"text": "A novel type of dual-mode microstrip bandpass filter using degenerate modes of a meander loop resonator has been developed for miniaturization of high selectivity narrowband microwave bandpass filters. A filter of this type having a 2.5% bandwidth at 1.58 GHz was designed and fabricated. The measured filter performance is presented.",
"title": ""
},
{
"docid": "1986179d7d985114fa14bbbe01770d8a",
"text": "A low-power consumption, small-size smart antenna, named electronically steerable parasitic array radiator (ESPAR), has been designed. Beamforming is achieved by tuning the load reactances at parasitic elements surrounding the active central element. A fast beamforming algorithm based on simultaneous perturbation stochastic approximation with a maximum cross correlation coefficient criterion is proposed. The simulation and experimental results validate the algorithm. In an environment where the signal-to-interference-ratio is 0 dB, the algorithm converges within 50 iterations and achieves an output signal-to-interference-plus-noise-ratio of 10 dB. With the fast beamforming ability and its low-power consumption attribute, the ESPAR antenna makes the mass deployment of smart antenna technologies practical.",
"title": ""
},
{
"docid": "88d377a1317eb45b8650947af5883255",
"text": "Social entrepreneurship has raised increasing interest among scholars, yet we still know relatively little about the particular dynamics and processes involved. This paper aims at contributing to the field of social entrepreneurship by clarifying key elements, providing working definitions, and illuminating the social entrepreneurship process. In the first part of the paper we review the existing literature. In the second part we develop a model on how intentions to create a social venture –the tangible outcome of social entrepreneurship– get formed. Combining insights from traditional entrepreneurship literature and anecdotal evidence in the field of social entrepreneurship, we propose that behavioral intentions to create a social venture are influenced, first, by perceived social venture desirability, which is affected by attitudes such as empathy and moral judgment, and second, by perceived social venture feasibility, which is facilitated by social support and self-efficacy beliefs.",
"title": ""
},
{
"docid": "c117da74c302d9e108970854d79e54fd",
"text": "Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for a more scalable rule extraction of various domains. However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Webbased extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template. Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm, operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.",
"title": ""
},
{
"docid": "96053a9bd2faeff5ddf61f15f2b989c4",
"text": "Poly(vinyl alcohol) cryogel, PVA-C, is presented as a tissue-mimicking material, suitable for application in magnetic resonance (MR) imaging and ultrasound imaging. A 10% by weight poly(vinyl alcohol) in water solution was used to form PVA-C, which is solidified through a freeze-thaw process. The number of freeze-thaw cycles affects the properties of the material. The ultrasound and MR imaging characteristics were investigated using cylindrical samples of PVA-C. The speed of sound was found to range from 1520 to 1540 m s(-1), and the attenuation coefficients were in the range of 0.075-0.28 dB (cm MHz)(-1). T1 and T2 relaxation values were found to be 718-1034 ms and 108-175 ms, respectively. We also present applications of this material in an anthropomorphic brain phantom, a multi-volume stenosed vessel phantom and breast biopsy phantoms. Some suggestions are made for how best to handle this material in the phantom design and development process.",
"title": ""
},
{
"docid": "6465daca71e18cb76ec5442fb94f625a",
"text": "In this paper, we show how an open-source, language-independent proofreading tool has been built. Many languages lack contextual proofreading tools; for many others, only partial solutions are available. Using existing, largely language-independent tools and collaborative processes it is possible to develop a practical style and grammar checker and to fight the digital divide in countries where commercial linguistic application software is unavailable or too expensive for average users. The described solution depends on relatively easily available language resources and does not require a fully formalized grammar nor a deep parser, yet it can detect many frequent context-dependent spelling mistakes, as well as grammatical, punctuation, usage, and stylistic errors. Copyright q 2010 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "2c289744ea8ae9d8f0c6ce4ba356b6cb",
"text": "The mission of the IPTS is to provide customer-driven support to the EU policy-making process by researching science-based responses to policy challenges that have both a socioeconomic and a scientific or technological dimension. Legal Notice Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of this publication. (*) Certain mobile telephone operators do not allow access to 00 800 numbers or these calls may be billed.",
"title": ""
},
{
"docid": "8d79675b0db5d84251bea033808396c3",
"text": "This paper discusses verification and validation of simulation models. The different approaches to deciding model validity am presented; how model verification and validation relate to the model development process are discussed; various validation techniques are defined, conceptual model validity, model verification, operational validity, and data validity are described; ways to document results are given; and a recommended procedure is presented.",
"title": ""
},
{
"docid": "61bb811aa336e77d2549c51939f9668d",
"text": "Policy languages (such as privacy and rights) have had little impact on the wider community. Now that Social Networks have taken off, the need to revisit Policy languages and realign them towards Social Networks requirements has become more apparent. One such language is explored as to its applicability to the Social Networks masses. We also argue that policy languages alone are not sufficient and thus they should be paired with reasoning mechanisms to provide precise and unambiguous execution models of the policies. To this end we propose a computationally oriented model to represent, reason with and execute policies for Social Networks.",
"title": ""
},
{
"docid": "713c7761ecba317bdcac451fcc60e13d",
"text": "We describe a method for automatically transcribing guitar tablatures from audio signals in accordance with the player's proficiency for use as support for a guitar player's practice. The system estimates the multiple pitches in each time frame and the optimal fingering considering playability and player's proficiency. It combines a conventional multipitch estimation method with a basic dynamic programming method. The difficulty of the fingerings can be changed by tuning the parameter representing the relative weights of the acoustical reproducibility and the fingering easiness. Experiments conducted using synthesized guitar audio signals to evaluate the transcribed tablatures in terms of the multipitch estimation accuracy and fingering easiness demonstrated that the system can simplify the fingering with higher precision of multipitch estimation results than the conventional method.",
"title": ""
},
{
"docid": "b2e62194ce1eb63e0d13659a546db84b",
"text": "The rapid advance of mobile computing technology and wireless networking, there is a significant increase of mobile subscriptions. This drives a strong demand for mobile cloud applications and services for mobile device users. This brings out a great business and research opportunity in mobile cloud computing (MCC). This paper first discusses the market trend and related business driving forces and opportunities. Then it presents an overview of MCC in terms of its concepts, distinct features, research scope and motivations, as well as advantages and benefits. Moreover, it discusses its opportunities, issues and challenges. Furthermore, the paper highlights a research roadmap for MCC.",
"title": ""
},
{
"docid": "3c33528735b53a4f319ce4681527c163",
"text": "Within the past two years, important advances have been made in modeling credit risk at the portfolio level. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually. Less progress has been made, however, with comparative analyses. Direct comparison often is not straightforward, because the different models may be presented within rather different mathematical frameworks. This paper offers a comparative anatomy of two especially influential benchmarks for credit risk models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We then design simulation exercises which evaluate the effect of each of these differences individually. JEL Codes: G31, C15, G11 ∗The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank David Jones for drawing my attention to this issue, and for his helpful comments. I am also grateful to Mark Carey for data and advice useful in calibration of the models, and to Chris Finger and Tom Wilde for helpful comments. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202)452-3705. Fax: (202)452-5295. Email: 〈[email protected]〉. Over the past decade, financial institutions have developed and implemented a variety of sophisticated models of value-at-risk for market risk in trading portfolios. These models have gained acceptance not only among senior bank managers, but also in amendments to the international bank regulatory framework. Much more recently, important advances have been made in modeling credit risk in lending portfolios. The new models are designed to quantify credit risk on a portfolio basis, and thus have application in control of risk concentration, evaluation of return on capital at the customer level, and more active management of credit portfolios. Future generations of today’s models may one day become the foundation for measurement of regulatory capital adequacy. Two of the models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+, have been released freely to the public since 1997 and have quickly become influential benchmarks. Practitioners and policy makers have invested in implementing and exploring each of the models individually, but have made less progress with comparative analyses. The two models are intended to measure the same risks, but impose different restrictions and distributional assumptions, and suggest different techniques for calibration and solution. Thus, given the same portfolio of credit exposures, the two models will, in general, yield differing evaluations of credit risk. Determining which features of the models account for differences in output would allow us a better understanding of the sensitivity of the models to the particular assumptions they employ. Unfortunately, direct comparison of the models is not straightforward, because the two models are presented within rather different mathematical frameworks. The CreditMetrics model is familiar to econometricians as an ordered probit model. 
Credit events are driven by movements in underlying unobserved latent variables. The latent variables are assumed to depend on external “risk factors.” Common dependence on the same risk factors gives rise to correlations in credit events across obligors. The CreditRisk+ model is based instead on insurance industry models of event risk. Instead of a latent variable, each obligor has a default probability. The default probabilities are not constant over time, but rather increase or decrease in response to background macroeconomic factors. To the extent that two obligors are sensitive to the same set of background factors, their default probabilities will move together. These co-movements in probability give rise to correlations in defaults. CreditMetrics and CreditRisk+ may serve essentially the same function, but they appear to be constructed quite differently. This paper offers a comparative anatomy of CreditMetrics and CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We can then design simulation exercises which evaluate the effect of these differences individually. We proceed as follows. Section 1 presents a summary of the CreditRisk+ model, and introduces a restricted version of CreditMetrics. The restrictions are imposed to facilitate direct comparison of CreditMetrics and CreditRisk+. While some of the richness of the full CreditMetrics implementation is sacrificed, the essential mathematical characteristics of the model are preserved. Our",
"title": ""
},
{
"docid": "564185f1eaa04d4d968ffcae05f030f5",
"text": "Municipal solid waste is a major challenge facing developing countries [1]. Amount of waste generated by developing countries is increasing as a result of urbanisation and economic growth [2]. In Africa and other developing countries waste is disposed of in poorly managed landfills, controlled and uncontrolled dumpsites increasing environmental health risks [3]. Households have a major role to play in reducing the amount of waste sent to landfills [4]. Recycling is accepted by developing and developed countries as one of the best solution in municipal solid waste management [5]. Households influence the quality and amount of recyclable material recovery [1]. Separation of waste at source can reduce contamination of recyclable waste material. Households are the key role players in ensuring that waste is separated at source and their willingness to participate in source separation of waste should be encouraged by municipalities and local regulatory authorities [6,7].",
"title": ""
},
{
"docid": "f249a6089a789e52eeadc8ae16213bc1",
"text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.",
"title": ""
},
{
"docid": "89ae73a8337870e8ef5e078de7bf2f58",
"text": "In grid connected photovoltaic (PV) systems, maximum power point tracking (MPPT) algorithm plays an important role in optimizing the solar energy efficiency. In this paper, the new artificial neural network (ANN) based MPPT method has been proposed for searching maximum power point (MPP) fast and exactly. For the first time, the combined method is proposed, which is established on the ANN-based PV model method and incremental conductance (IncCond) method. The advantage of ANN-based PV model method is the fast MPP approximation base on the ability of ANN according the parameters of PV array that used. The advantage of IncCond method is the ability to search the exactly MPP based on the feedback voltage and current but don't care the characteristic on PV array‥ The effectiveness of proposed algorithm is validated by simulation using Matlab/ Simulink and experimental results using kit field programmable gate array (FPGA) Virtex II pro of Xilinx.",
"title": ""
},
{
"docid": "9737feb4befdaf995b1f9e88535577ec",
"text": "This paper addresses the problem of detecting the presence of malware that leaveperiodictraces innetworktraffic. This characteristic behavior of malware was found to be surprisingly prevalent in a parallel study. To this end, we propose a visual analytics solution that supports both automatic detection and manual inspection of periodic signals hidden in network traffic. The detected periodic signals are visually verified in an overview using a circular graph and two stacked histograms as well as in detail using deep packet inspection. Our approach offers the capability to detect complex periodic patterns, but avoids the unverifiability issue often encountered in related work. The periodicity assumption imposed on malware behavior is a relatively weak assumption, but initial evaluations with a simulated scenario as well as a publicly available network capture demonstrate its applicability.",
"title": ""
},
{
"docid": "ac529a455bcefa58abafa6c679bec2b4",
"text": "This article presents near-optimal guarantees for stable and robust image recovery from undersampled noisy measurements using total variation minimization. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices.",
"title": ""
}
] | scidocsrr |
8d85a5075e6ae5ee69d0ad8f11759355 | Contactless payment systems based on RFID technology | [
{
"docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a",
"text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.",
"title": ""
}
] | [
{
"docid": "ab08118b53dd5eee3579260e8b23a9c5",
"text": "We have trained a deep (convolutional) neural network to predict the ground-state energy of an electron in four classes of confining two-dimensional electrostatic potentials. On randomly generated potentials, for which there is no analytic form for either the potential or the ground-state energy, the neural network model was able to predict the ground-state energy to within chemical accuracy, with a median absolute error of 1.49 mHa. We also investigate the performance of the model in predicting other quantities such as the kinetic energy and the first excited-state energy of random potentials. While we demonstrated this approach on a simple, tractable problem, the transferability and excellent performance of the resulting model suggests further applications of deep neural networks to problems of electronic structure.",
"title": ""
},
{
"docid": "d8bb742d4d341a4919132408100fcfa5",
"text": "In this study we represent malware as opcode sequences and detect it using a deep belief network (DBN). Compared with traditional shallow neural networks, DBNs can use unlabeled data to pretrain a multi-layer generative model, which can better represent the characteristics of data samples. We compare the performance of DBNs with that of three baseline malware detection models, which use support vector machines, decision trees, and the k-nearest neighbor algorithm as classifiers. The experiments demonstrate that the DBN model provides more accurate detection than the baseline models. When additional unlabeled data are used for DBN pretraining, the DBNs perform better than the other detection models. We also use the DBNs as an autoencoder to extract the feature vectors of executables. The experiments indicate that the autoencoder can effectively model the underlying structure of input data and significantly reduce the dimensions of feature vectors.",
"title": ""
},
{
"docid": "f6c7cf332ad766a0f915ddcace8d5a83",
"text": "Despite the recent trend of increasingly large datasets for object detection, there still exist many classes with few training examples. To overcome this lack of training data for certain classes, we propose a novel way of augmenting the training data for each class by borrowing and transforming examples from other classes. Our model learns which training instances from other classes to borrow and how to transform the borrowed examples so that they become more similar to instances from the target class. Our experimental results demonstrate that our new object detector, with borrowed and transformed examples, improves upon the current state-of-the-art detector on the challenging SUN09 object detection dataset. Thesis Supervisor: Antonio Torralba Title: Associate Professor of Electrical Engineering and Computer Science",
"title": ""
},
{
"docid": "2a45f4ed21d9534a937129532cb32020",
"text": "BACKGROUND\nCore stability training has grown in popularity over 25 years, initially for back pain prevention or therapy. Subsequently, it developed as a mode of exercise training for health, fitness and sport. The scientific basis for traditional core stability exercise has recently been questioned and challenged, especially in relation to dynamic athletic performance. Reviews have called for clarity on what constitutes anatomy and function of the core, especially in healthy and uninjured people. Clinical research suggests that traditional core stability training is inappropriate for development of fitness for heath and sports performance. However, commonly used methods of measuring core stability in research do not reflect functional nature of core stability in uninjured, healthy and athletic populations. Recent reviews have proposed a more dynamic, whole body approach to training core stabilization, and research has begun to measure and report efficacy of these modes training. The purpose of this study was to assess extent to which these developments have informed people currently working and participating in sport.\n\n\nMETHODS\nAn online survey questionnaire was developed around common themes on core stability training as defined in the current scientific literature and circulated to a sample population of people working and participating in sport. Survey results were assessed against key elements of the current scientific debate.\n\n\nRESULTS\nPerceptions on anatomy and function of the core were gathered from a representative cohort of athletes, coaches, sports science and sports medicine practitioners (n = 241), along with their views on effectiveness of various current and traditional exercise training modes. Most popular method of testing and measuring core function was subjective assessment through observation (43%), while a quarter (22%) believed there was no effective method of measurement. Perceptions of people in sport reflect the scientific debate, and practitioners have adopted a more functional approach to core stability training. There was strong support for loaded, compound exercises performed upright, compared to moderate support for traditional core stability exercises. Half of the participants (50%) in the survey, however, still support a traditional isolation core stability training.\n\n\nCONCLUSION\nPerceptions in applied practice on core stability training for dynamic athletic performance are aligned to a large extent to the scientific literature.",
"title": ""
},
{
"docid": "8860af067ed1af9aba072d85f3e6171b",
"text": "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4%, 1.0% accuracy loss under 2× speedup respectively, which is significant.",
"title": ""
},
{
"docid": "bb65f9fec86c2f66b5b61be527b2bdf4",
"text": "Success in natural language inference (NLI) should require a model to understand both lexical and compositional semantics. However, through adversarial evaluation, we find that several state-of-the-art models with diverse architectures are over-relying on the former and fail to use the latter. Further, this compositionality unawareness is not reflected via standard evaluation on current datasets. We show that removing RNNs in existing models or shuffling input words during training does not induce large performance loss despite the explicit removal of compositional information. Therefore, we propose a compositionality-sensitivity testing setup that analyzes models on natural examples from existing datasets that cannot be solved via lexical features alone (i.e., on which a bag-of-words model gives a high probability to one wrong label), hence revealing the models’ actual compositionality awareness. We show that this setup not only highlights the limited compositional ability of current NLI models, but also differentiates model performance based on design, e.g., separating shallow bag-of-words models from deeper, linguistically-grounded tree-based models. Our evaluation setup is an important analysis tool: complementing currently existing adversarial and linguistically driven diagnostic evaluations, and exposing opportunities for future work on evaluating models’ compositional understanding.",
"title": ""
},
{
"docid": "e61a0ba24db737d42a730d5738583ffa",
"text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the veriication problem is decidable; this is shown using results in algebraic and transcendental number theory.",
"title": ""
},
{
"docid": "ad6763de671234eb48b3629c25ab9113",
"text": "Photovoltaic (PV) system performance is influenced by several factors, including irradiance, temperature, shading, degradation, mismatch losses, soiling, etc. Shading of a PV array, in particular, either complete or partial, can have a significant impact on its power output and energy yield, depending on array configuration, shading pattern, and the bypass diodes incorporated in the PV modules. In this paper, the effect of partial shading on multicrystalline silicon (mc-Si) PV modules is investigated. A PV module simulation model implemented in P-Spice is first employed to quantify the effect of partial shading on the I-V curve and the maximum power point (MPP) voltage and power. Then, generalized formulae are derived, which permit accurate enough evaluation of MPP voltage and power of mc-Si PV modules, without the need to resort to detailed modeling and simulation. The equations derived are validated via experimental results.",
"title": ""
},
{
"docid": "cf9fe52efd734c536d0a7daaf59a9bcd",
"text": "Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.",
"title": ""
},
{
"docid": "2e65ae613aa80aac27d5f8f6e00f5d71",
"text": "Industrial systems, e.g., wind turbines, generate big amounts of data from reliable sensors with high velocity. As it is unfeasible to store and query such big amounts of data, only simple aggregates are currently stored. However, aggregates remove fluctuations and outliers that can reveal underlying problems and limit the knowledge to be gained from historical data. As a remedy, we present the distributed Time Series Management System (TSMS) ModelarDB that uses models to store sensor data. We thus propose an online, adaptive multi-model compression algorithm that maintains data values within a user-defined error bound (possibly zero). We also propose (i) a database schema to store time series as models, (ii) methods to push-down predicates to a key-value store utilizing this schema, (iii) optimized methods to execute aggregate queries on models, (iv) a method to optimize execution of projections through static code-generation, and (v) dynamic extensibility that allows new models to be used without recompiling the TSMS. Further, we present a general modular distributed TSMS architecture and its implementation, ModelarDB, as a portable library, using Apache Spark for query processing and Apache Cassandra for storage. An experimental evaluation shows that, unlike current systems, ModelarDB hits a sweet spot and offers fast ingestion, good compression, and fast, scalable online aggregate query processing at the same time. This is achieved by dynamically adapting to data sets using multiple models. The system degrades gracefully as more outliers occur and the actual errors are much lower than the bounds. PVLDB Reference Format: Søren Kejser Jensen, Torben Bach Pedersen, Christian Thomsen. ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra. PVLDB, 11(11): 1688-1701, 2018. DOI: https://doi.org/10.14778/3236187.3236215",
"title": ""
},
{
"docid": "c7237823182b47cc03c70937bbbb0be0",
"text": "To discover patterns in historical data, climate scientists have applied various clustering methods with the goal of identifying regions that share some common climatological behavior. However, past approaches are limited by the fact that they either consider only a single time period (snapshot) of multivariate data, or they consider only a single variable by using the time series data as multi-dimensional feature vector. In both cases, potentially useful information may be lost. Moreover, clusters in high-dimensional data space can be difficult to interpret, prompting the need for a more effective data representation. We address both of these issues by employing a complex network (graph) to represent climate data, a more intuitive model that can be used for analysis while also having a direct mapping to the physical world for interpretation. A cross correlation function is used to weight network edges, thus respecting the temporal nature of the data, and a community detection algorithm identifies multivariate clusters. Examining networks for consecutive periods allows us to study structural changes over time. We show that communities have a climatological interpretation and that disturbances in structure can be an indicator of climate events (or lack thereof). Finally, we discuss how this model can be applied for the discovery of more complex concepts such as unknown teleconnections or the development of multivariate climate indices and predictive insights.",
"title": ""
},
{
"docid": "552d9591ea3bebb0316fb4111707b3a3",
"text": "The long jump has been widely studied in recent years. Two models exist in the literature which define the relationship between selected variables that affect performance. Both models suggest that the critical phase of the long jump event is the touch-down to take-off phase, as it is in this phase that the necessary vertical velocity is generated. Many three dimensional studies of the long jump exist, but the only studies to have reported detailed data on this phase were two-dimensional in nature. In these, the poor relationships obtained between key variables and performance led to the suggestion that there may be some relevant information in data in the third dimension. The aims of this study were to conduct a three-dimensional analysis of the touch-down to take-off phase in the long jump and to explore the interrelationships between key variables. Fourteen male long jumpers were filmed using three-dimensional methods during the finals of the 1994 (n = 8) and 1995 (n = 6) UK National Championships. Various key variables for the long jump were used in a series of correlational and multiple regression analyses. The relationships between key variables when correlated directly one-to-one were generally poor. However, when analysed using a multiple regression approach, a series of variables was identified which supported the general principles outlined in the two models. These variables could be interpreted in terms of speed, technique and strength. We concluded that in the long jump, variables that are important to performance are interdependent and can only be identified by using appropriate statistical techniques. This has implications for a better understanding of the long jump event and it is likely that this finding can be generalized to other technical sports skills.",
"title": ""
},
{
"docid": "cc08118c532cbe4665f8a3ac8b7d5fd7",
"text": "We evaluated the use of gamification to facilitate a student-centered learning environment within an undergraduate Year 2 Personal and Professional Development (PPD) course. In addition to face-to-face classroom practices, an information technology-based gamified system with a range of online learning activities was presented to students as support material. The implementation of the gamified course lasted two academic terms. The subsequent evaluation from a cohort of 136 students indicated that student performance was significantly higher among those who participated in the gamified system than in those who engaged with the nongamified, traditional delivery, while behavioral engagement in online learning activities was positively related to course performance, after controlling for gender, attendance, and Year 1 PPD performance. Two interesting phenomena appeared when we examined the influence of student background: female students participated significantly more in online learning activities than male students, and students with jobs engaged significantly more in online learning activities than students without jobs. The gamified course design advocated in this work may have significant implications for educators who wish to develop engaging technology-mediated learning environments that enhance students’ learning, or for a broader base of professionals who wish to engage a population of potential users, such as managers engaging employees or marketers engaging customers.",
"title": ""
},
{
"docid": "71a262b1c91c89f379527b271e45e86e",
"text": "Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is a heated and challenging problem in the field of automatic image interpretation. Despite convolutional neural networks (CNNs) having facilitated the development in this domain, the computation efficiency under real-time application and the accurate positioning on relatively small objects in HSR images are two noticeable obstacles which have largely restricted the performance of detection methods. To tackle the above issues, we first introduce semantic segmentation-aware CNN features to activate the detection feature maps from the lowest level layer. In conjunction with this segmentation branch, another module which consists of several global activation blocks is proposed to enrich the semantic information of feature maps from higher level layers. Then, these two parts are integrated and deployed into the original single shot detection framework. Finally, we use the modified multi-scale feature maps with enriched semantics and multi-task training strategy to achieve end-to-end detection with high efficiency. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset have demonstrated the superiority of the presented method.",
"title": ""
},
{
"docid": "f8fc595f60fda530cc7796dbba83481c",
"text": "This paper proposes a pseudo random number generator using Elman neural network. The proposed neural network is a recurrent neural network able to generate pseudo-random numbers from the weight matrices obtained from the layer weights of the Elman network. The proposed method is not computationally demanding and is easy to implement for varying bit sequences. The random numbers generated using our method have been subjected to frequency test and ENT test program. The results show that recurrent neural networks can be used as a pseudo random number generator(prng).",
"title": ""
},
{
"docid": "2e7513624eed605a4e0da539162dd715",
"text": "In the domain of Internet of Things (IoT), applications are modeled to understand and react based on existing contextual and situational parameters. This work implements a management flow for the abstraction of real world objects and virtual composition of those objects to provide IoT services. We also present a real world knowledge model that aggregates constraints defining a situation, which is then used to detect and anticipate future potential situations. It is implemented based on reasoning and machine learning mechanisms. This work showcases a prototype implementation of the architectural framework in a smart home scenario, targeting two functionalities: actuation and automation based on the imposed constraints and thereby responding to situations and also adapting to the user preferences. It thus provides a productive integration of heterogeneous devices, IoT platforms, and cognitive technologies to improve the services provided to the user.",
"title": ""
},
{
"docid": "d09f433d8b9776e45fd3a9516cde004d",
"text": "The review focuses on one growing dimension of health care globalisation - medical tourism, whereby consumers elect to travel across borders or to overseas destinations to receive their treatment. Such treatments include cosmetic and dental surgery; cardio, orthopaedic and bariatric surgery; IVF treatment; and organ and tissue transplantation. The review sought to identify the medical tourist literature for out-of-pocket payments, focusing wherever possible on evidence and experience pertaining to patients in mid-life and beyond. Despite increasing media interest and coverage hard empirical findings pertaining to out-of-pocket medical tourism are rare. Despite a number of countries offering relatively low cost treatments we know very little about many of the numbers and key indicators on medical tourism. The narrative review traverses discussion on medical tourist markets, consumer choice, clinical outcomes, quality and safety, and ethical and legal dimensions. The narrative review draws attention to gaps in research evidence and strengthens the call for more empirical research on the role, process and outcomes of medical tourism. In concluding it makes suggestion for the content of such a strategy.",
"title": ""
},
{
"docid": "0fac1fde74f99bd6b4e9338f54ec41d6",
"text": "This thesis addresses total variation (TV) image restoration and blind image deconvolution. Classical image processing problems, such as deblurring, call for some kind of regularization. Total variation is among the state-of-the-art regularizers, as it provides a good balance between the ability to describe piecewise smooth images and the complexity of the resulting algorithms. In this thesis, we propose a minimization algorithm for TV-based image restoration that belongs to the majorization-minimization class (MM). The proposed algorithm is similar to the known iterative re-weighted least squares (IRSL) approach, although it constitutes an original interpretation of this method from the MM perspective. The problem of choosing the regularization parameter is also addressed in this thesis. A new Bayesian method is introduced to automatically estimate the parameter, by assigning it a non-informative prior, followed by integration based on an approximation of the associated partition function. The proposed minimization problem, also addressed using the MM framework, results on an update rule for the regularization parameter, and can be used with any TV-based image deblurring algorithm. Blind image deconvolution is the third topic of this thesis. We consider the case of linear motion blurs. We propose a new discretization of the motion blur kernel, and a new estimation algorithm to recover the motion blur parameters (orientation and length) from blurred natural images, based on the Radon transform of the spectrum of the blurred images.",
"title": ""
},
{
"docid": "fbd390ed58529fc5dc552d7550168546",
"text": "Recently, tuple-stores have become pivotal structures in many information systems. Their ability to handle large datasets makes them important in an era with unprecedented amounts of data being produced and exchanged. However, these tuple-stores typically rely on structured peer-to-peer protocols which assume moderately stable environments. Such assumption does not always hold for very large scale systems sized in the scale of thousands of machines. In this paper we present a novel approach to the design of a tuple-store. Our approach follows a stratified design based on an unstructured substrate. We focus on this substrate and how the use of epidemic protocols allow reaching high dependability and scalability.",
"title": ""
}
] | scidocsrr |
785e7bc9e4b13685cc55441a65a157d2 | A Bayesian approach to covariance estimation and data fusion | [
{
"docid": "2d787b0deca95ce212e11385ae60c36d",
"text": "In this paper, we introduce three novel distributed Kalman filtering (DKF) algorithms for sensor networks. The first algorithm is a modification of a previous DKF algorithm presented by the author in CDC-ECC '05. The previous algorithm was only applicable to sensors with identical observation matrices which meant the process had to be observable by every sensor. The modified DKF algorithm uses two identical consensus filters for fusion of the sensor data and covariance information and is applicable to sensor networks with different observation matrices. This enables the sensor network to act as a collective observer for the processes occurring in an environment. Then, we introduce a continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network. This peer-to-peer distributed estimation method gives rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates. Communication complexity and packet-loss issues are discussed. The performance and effectiveness of these distributed Kalman filtering algorithms are compared and demonstrated on a target tracking task.",
"title": ""
},
{
"docid": "e9d0c366c241e1fc071d82ca810d1be2",
"text": "The problem of distributed Kalman filtering (DKF) for sensor networks is one of the most fundamental distributed estimation problems for scalable sensor fusion. This paper addresses the DKF problem by reducing it to two separate dynamic consensus problems in terms of weighted measurements and inverse-covariance matrices. These to data fusion problems are solved is a distributed way using low-pass and band-pass consensus filters. Consensus filters are distributed algorithms that allow calculation of average-consensus of time-varying signals. The stability properties of consensus filters is discussed in a companion CDC ’05 paper [24]. We show that a central Kalman filter for sensor networks can be decomposed into n micro-Kalman filters with inputs that are provided by two types of consensus filters. This network of micro-Kalman filters collectively are capable to provide an estimate of the state of the process (under observation) that is identical to the estimate obtained by a central Kalman filter given that all nodes agree on two central sums. Later, we demonstrate that our consensus filters can approximate these sums and that gives an approximate distributed Kalman filtering algorithm. A detailed account of the computational and communication architecture of the algorithm is provided. Simulation results are presented for a sensor network with 200 nodes and more than 1000 links.",
"title": ""
}
] | [
{
"docid": "5931cb779b24065c5ef48451bc46fac4",
"text": "In order to provide a material that can facilitate the modeling and construction of a Furuta pendulum, this paper presents the deduction, step-by-step, of a Furuta pendulum mathematical model by using the Lagrange equations of motion. Later, a mechanical design of the Furuta pendulum is carried out via the software Solid Works and subsequently a prototype is built. Numerical simulations of the Furuta pendulum model are performed via Mat lab-Simulink. Furthermore, the Furuta pendulum prototype built is experimentally tested by using Mat lab-Simulink, Control Desk, and a DS1104 board from dSPACE.",
"title": ""
},
{
"docid": "5b341604b207e80ef444d11a9de82f72",
"text": "Digital deformities continue to be a common ailment among many patients who present to foot and ankle specialists. When conservative treatment fails to eliminate patient complaints, surgical correction remains a viable treatment option. Proximal interphalangeal joint arthrodesis remains the standard procedure among most foot and ankle surgeons. With continued advances in fixation technology and techniques, surgeons continue to have better options for the achievement of excellent digital surgery outcomes. This article reviews current trends in fixation of digital deformities while highlighting pertinent aspects of the physical examination, radiographic examination, and surgical technique.",
"title": ""
},
{
"docid": "c197fcf3042099003f3ed682f7b7f19c",
"text": "Interaction graphs are ubiquitous in many fields such as bioinformatics, sociology and physical sciences. There have been many studies in the literature targeted at studying and mining these graphs. However, almost all of them have studied these graphs from a static point of view. The study of the evolution of these graphs over time can provide tremendous insight on the behavior of entities, communities and the flow of information among them. In this work, we present an event-based characterization of critical behavioral patterns for temporally varying interaction graphs. We use non-overlapping snapshots of interaction graphs and develop a framework for capturing and identifying interesting events from them. We use these events to characterize complex behavioral patterns of individuals and communities over time. We demonstrate the application of behavioral patterns for the purposes of modeling evolution, link prediction and influence maximization. Finally, we present a diffusion model for evolving networks, based on our framework.",
"title": ""
},
{
"docid": "8c0b544b88ebe81ebe4b374a4e08bb5e",
"text": "We study 3D shape modeling from a single image and make contributions to it in three aspects. First, we present Pix3D, a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc. Building such a large-scale dataset, however, is highly challenging; existing datasets either contain only synthetic data, or lack precise alignment between 2D images and 3D shapes, or only have a small number of images. Second, we calibrate the evaluation criteria for 3D shape reconstruction through behavioral studies, and use them to objectively and systematically benchmark cutting-edge reconstruction algorithms on Pix3D. Third, we design a novel model that simultaneously performs 3D reconstruction and pose estimation; our multi-task learning approach achieves state-of-the-art performance on both tasks.",
"title": ""
},
{
"docid": "b596be97699686e5e37cab71bee8fe4a",
"text": "The task of selecting project portfolios is an important and recurring activity in many organizations. There are many techniques available to assist in this process, but no integrated framework for carrying it out. This paper simpli®es the project portfolio selection process by developing a framework which separates the work into distinct stages. Each stage accomplishes a particular objective and creates inputs to the next stage. At the same time, users are free to choose the techniques they ®nd the most suitable for each stage, or in some cases to omit or modify a stage if this will simplify and expedite the process. The framework may be implemented in the form of a decision support system, and a prototype system is described which supports many of the related decision making activities. # 1999 Published by Elsevier Science Ltd and IPMA. All rights reserved",
"title": ""
},
{
"docid": "57ca7842e7ab21b51c4069e76121fc26",
"text": "This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in objectoriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.",
"title": ""
},
{
"docid": "d93795318775df2c451eaf8c04a764cf",
"text": "The queries issued to search engines are often ambiguous or multifaceted, which requires search engines to return diverse results that can fulfill as many different information needs as possible; this is called search result diversification. Recently, the relational learning to rank model, which designs a learnable ranking function following the criterion of maximal marginal relevance, has shown effectiveness in search result diversification [Zhu et al. 2014]. The goodness of a diverse ranking model is usually evaluated with diversity evaluation measures such as α-NDCG [Clarke et al. 2008], ERR-IA [Chapelle et al. 2009], and D#-NDCG [Sakai and Song 2011]. Ideally the learning algorithm would train a ranking model that could directly optimize the diversity evaluation measures with respect to the training data. Existing relational learning to rank algorithms, however, only train the ranking models by optimizing loss functions that loosely relate to the evaluation measures. To deal with the problem, we propose a general framework for learning relational ranking models via directly optimizing any diversity evaluation measure. In learning, the loss function upper-bounding the basic loss function defined on a diverse ranking measure is minimized. We can derive new diverse ranking algorithms under the framework, and several diverse ranking algorithms are created based on different upper bounds over the basic loss function. We conducted comparisons between the proposed algorithms with conventional diverse ranking methods using the TREC benchmark datasets. Experimental results show that the algorithms derived under the diverse learning to rank framework always significantly outperform the state-of-the-art baselines.",
"title": ""
},
{
"docid": "8b71cb1b7cdaa434ac4b238b97a30e66",
"text": "Research on interoperability of technology-enhanced learning (TEL) repositories throughout the last decade has led to a fragmented landscape of competing approaches, such as metadata schemas and interface mechanisms. However, so far Web-scale integration of resources is not facilitated, mainly due to the lack of take-up of shared principles, datasets and schemas. On the other hand, the Linked Data approach has emerged as the de-facto standard for sharing data on the Web and offers a large potential to solve interoperability issues in the field of TEL. In this paper, we describe a general approach to exploit the wealth of already existing TEL data on the Web by allowing its exposure as Linked Data and by taking into account automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain. This approach has been implemented in the context of the mEducator project where data from a number of open TEL data repositories has been integrated, exposed and enriched by following Linked Data principles.",
"title": ""
},
{
"docid": "61e8deaaa02297ba3edb2eb14ffb7f26",
"text": "Given an edge-weighted graph G and two distinct vertices s and t of G, the next-to-shortest path problem asks for a path from s to t of minimum length among all paths from s to t except the shortest ones. In this article, we consider the version where G is directed and all edge weights are positive. Some properties of the requested path are derived when G is an arbitrary digraph. In addition, if G is planar, an O(n3)-time algorithm is proposed, where n is the number of vertices of G. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 000(00), 000–00",
"title": ""
},
{
"docid": "07e2b3550183fd4d2a42591a9726f77c",
"text": "Modern cryptocurrency systems, such as Ethereum, permit complex financial transactions through scripts called smart contracts. These smart contracts are executed many, many times, always without real concurrency. First, all smart contracts are serially executed by miners before appending them to the blockchain. Later, those contracts are serially re-executed by validators to verify that the smart contracts were executed correctly by miners. Serial execution limits system throughput and fails to exploit today's concurrent multicore and cluster architectures. Nevertheless, serial execution appears to be required: contracts share state, and contract programming languages have a serial semantics.\n This paper presents a novel way to permit miners and validators to execute smart contracts in parallel, based on techniques adapted from software transactional memory. Miners execute smart contracts speculatively in parallel, allowing non-conflicting contracts to proceed concurrently, and \"discovering\" a serializable concurrent schedule for a block's transactions, This schedule is captured and encoded as a deterministic fork-join program used by validators to re-execute the miner's parallel schedule deterministically but concurrently.\n Smart contract benchmarks run on a JVM with ScalaSTM show that a speedup of 1.33x can be obtained for miners and 1.69x for validators with just three concurrent threads.",
"title": ""
},
{
"docid": "7c5ce3005c4529e0c34220c538412a26",
"text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.",
"title": ""
},
{
"docid": "ce384939966654196aabbb076326c779",
"text": "We address the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions. As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative. However, many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations. Here we focus on adversarial domain adaptation, deriving important findings about when it performs well and what properties of the domains are important in this regard. Our experiments with StackExchange data show an average improvement of 5.6% over the best baseline across multiple pairs of domains.",
"title": ""
},
{
"docid": "f33ca4cfba0aab107eb8bd6d3d041b74",
"text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temorary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W , requires O(KCHW ) additional space using the classical im2col approach. More recently memory-efficient approaches requiring just O(KCHW ) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW ) and O(KW ) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our lowmemory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low memory algorithms often outperform the best patch-building algorithms using multiple threads.",
"title": ""
},
{
"docid": "c6e6099599be3cd2d1d87c05635f4248",
"text": "PURPOSE\nThe Food Cravings Questionnaires are among the most often used measures for assessing the frequency and intensity of food craving experiences. However, there is a lack of studies that have examined specific cut-off scores that may indicate pathologically elevated levels of food cravings.\n\n\nMETHODS\nReceiver-Operating-Characteristic analysis was used to determine sensitivity and specificity of scores on the Food Cravings Questionnaire-Trait-reduced (FCQ-T-r) for discriminating between individuals with (n = 43) and without (n = 389) \"food addiction\" as assessed with the Yale Food Addiction Scale 2.0.\n\n\nRESULTS\nA cut-off score of 50 on the FCQ-T-r discriminated between individuals with and without \"food addiction\" with high sensitivity (85%) and specificity (93%).\n\n\nCONCLUSIONS\nFCQ-T-r scores of 50 and higher may indicate clinically relevant levels of trait food craving.\n\n\nLEVEL OF EVIDENCE\nLevel V, descriptive study.",
"title": ""
},
{
"docid": "104c71324594c907f87d483c8c222f0f",
"text": "Operational controls are designed to support the integration of wind and solar power within microgrids. An aggregated model of renewable wind and solar power generation forecast is proposed to support the quantification of the operational reserve for day-ahead and real-time scheduling. Then, a droop control for power electronic converters connected to battery storage is developed and tested. Compared with the existing droop controls, it is distinguished in that the droop curves are set as a function of the storage state-of-charge (SOC) and can become asymmetric. The adaptation of the slopes ensures that the power output supports the terminal voltage while at the same keeping the SOC within a target range of desired operational reserve. This is shown to maintain the equilibrium of the microgrid's real-time supply and demand. The controls are implemented for the special case of a dc microgrid that is vertically integrated within a high-rise host building of an urban area. Previously untapped wind and solar power are harvested on the roof and sides of a tower, thereby supporting delivery to electric vehicles on the ground. The microgrid vertically integrates with the host building without creating a large footprint.",
"title": ""
},
{
"docid": "fd576b16a55c8f6bc4922561ef0d80bd",
"text": "Abs t rad -Th i s paper presents all controllers for the general ~'® control problem (with no assumptions on the plant matrices). Necessary and sufficient conditions for the existence of an ~® controller of any order are given in terms of three Linear Matrix Inequalities (LMIs). Our existence conditions are equivalent to Scherer's results, but with a more elementary derivation. Furthermore, we provide the set of all ~(= controllers explicitly parametrized in the state space using the positive definite solutions to the LMIs. Even under standard assumptions (full rank, etc.), our controller parametrization has an advantage over the Q-parametrization. The freedom Q (a real-rational stable transfer matrix with the ~® norm bounded above by a specified number) is replaced by a constant matrix L of fixed dimension with a norm bound, and the solutions (X, Y) to the LMIs. The inequality formulation converts the existence conditions to a convex feasibility problem, and also the free matrix L and the pair (X, Y) define a finite dimensional design space, as opposed to the infinite dimensional space associated with the Q-parametrization.",
"title": ""
},
{
"docid": "e92ab865f33c7548c21ba99785912d03",
"text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.",
"title": ""
},
{
"docid": "3f00cb229ea1f64e8b60bebaff0d99fe",
"text": "It is widely known that in wireless sensor networks (WSN), energy efficiency is of utmost importance. WSN need to be energy efficient but also need to provide better performance, particularly latency. A common protocol design guideline has been to trade off some performance metrics such as throughput and delay for energy. This paper presents a novel MAC (Express Energy Efficient Media Access Control) protocol that not only preserves the energy efficiency of current alternatives but also coordinates the transfer of packets from source to destination in such a way that latency and jitter are improved considerably. Our simulations show how EX-MAC (Express Energy Efficient MAC) outperforms the well-known S-MAC protocols in several performance metrics.",
"title": ""
},
{
"docid": "2ba1321f64fc8567fd70c030ea49b9e0",
"text": "Datasets originating from social networks are very valuable to many fields such as sociology and psychology. However, the supports from technical perspective are far from enough, and specific approaches are urgently in need. This paper applies data mining to psychology area for detecting depressed users in social network services. Firstly, a sentiment analysis method is proposed utilizing vocabulary and man-made rules to calculate the depression inclination of each micro-blog. Secondly, a depression detection model is constructed based on the proposed method and 10 features of depressed users derived from psychological research. Then 180 users and 3 kinds of classifiers are used to verify the model, whose precisions are all around 80%. Also, the significance of each feature is analyzed. Lastly, an application is developed within the proposed model for mental health monitoring online. This study is supported by some psychologists, and facilitates them in data-centric aspect in turn.",
"title": ""
},
{
"docid": "7edddf437e1759b8b13821670f52f4ba",
"text": "This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference on less than 30 mm between the position estimated with the SLAM and odometry, and a difference in the angular orientation of the mobile robot lower than 5° in absolute displacements up to 1000 mm.",
"title": ""
}
] | scidocsrr |
56b71e2392afb3cf4b51cffa7fa02509 | Battery management system in the Bayesian paradigm: Part I: SOC estimation | [
{
"docid": "69f36a0f043d8966dbcd7fc2607d61f8",
"text": "This paper presents a method for modeling and estimation of the state of charge (SOC) of lithium-ion (Li-Ion) batteries using neural networks (NNs) and the extended Kalman filter (EKF). The NN is trained offline using the data collected from the battery-charging process. This network finds the model needed in the state-space equations of the EKF, where the state variables are the battery terminal voltage at the previous sample and the SOC at the present sample. Furthermore, the covariance matrix for the process noise in the EKF is estimated adaptively. The proposed method is implemented on a Li-Ion battery to estimate online the actual SOC of the battery. Experimental results show a good estimation of the SOC and fast convergence of the EKF state variables.",
"title": ""
},
{
"docid": "560a19017dcc240d48bb879c3165b3e1",
"text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "f8ec5289b43504fcc96b9280ce7ce67d",
"text": "This study examined how scaffolds and student achievement levels influence inquiry and performance in a problem-based learning environment. The scaffolds were embedded within a hypermedia program that placed students at the center of a problem in which they were trying to become the youngest person to fly around the world in a balloon. One-hundred and eleven seventh grade students enrolled in a science and technology course worked in collaborative groups for a duration of 3 weeks to complete a project that included designing a balloon and a travel plan. Student groups used one of three problem-based, hypermedia programs: (1) a no scaffolding condition that did not provide access to scaffolds, (2) a scaffolding optional condition that provided access to scaffolds, but gave students the choice of whether or not to use them, and (3) a scaffolding required condition required students to complete all available scaffolds. Results revealed that students in the scaffolding optional and scaffolding required conditions performed significantly better than students in the no scaffolding condition on one of the two components of the group project. Results also showed that student achievement levels were significantly related to individual posttest scores; higherachieving students scored better on the posttest than lower-achieving students. In addition, analyses of group notebooks confirmed qualitative differences between students in the various conditions. Specifically, those in the scaffolding required condition produced more highly organized project notebooks containing a higher percentage of entries directly relevant to the problem. These findings suggest that scaffolds may enhance inquiry and performance, especially when students are required to access and",
"title": ""
},
{
"docid": "c88f5359fc6dc0cac2c0bd53cea989ee",
"text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharged. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification is taking place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.",
"title": ""
},
{
"docid": "edeefde21bbe1ace9a34a0ebe7bc6864",
"text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"title": ""
},
{
"docid": "6b57c73406000ca0683b275c7e164c24",
"text": "In this letter, a novel compact and broadband integrated transition between a laminated waveguide and an air-filled rectangular waveguide operating in Ka band is proposed. A three-pole filter equivalent circuit model is employed to interpret the working mechanism and to predict the performance of the transition. A back-to-back prototype of the proposed transition is designed and fabricated for proving the concept. Good agreement of the measured and simulated results is obtained. The measured result shows that the insertion loss of better than 0.26 dB from 34.8 to 37.8 GHz can be achieved.",
"title": ""
},
{
"docid": "95a58a9fa31373296af2c41e47fa0884",
"text": "Force.com is the preeminent on-demand application development platform in use today, supporting some 55,000+ organizations. Individual enterprises and commercial software-as-a-service (SaaS) vendors trust the platform to deliver robust, reliable, Internet-scale applications. To meet the extreme demands of its large user population, Force.com's foundation is a metadatadriven software architecture that enables multitenant applications.\n The focus of this paper is multitenancy, a fundamental design approach that can dramatically improve SaaS application management. This paper defines multitenancy, explains its benefits, and demonstrates why metadata-driven architectures are the premier choice for implementing multitenancy.",
"title": ""
},
{
"docid": "c69e805751421b516e084498e7fc6f44",
"text": "We investigate two extremal problems for polynomials giving upper bounds for spherical codes and for polynomials giving lower bounds for spherical designs, respectively. We consider two basic properties of the solutions of these problems. Namely, we estimate from below the number of double zeros and find zero Gegenbauer coefficients of extremal polynomials. Our results allow us to search effectively for such solutions using a computer. The best polynomials we have obtained give substantial improvements in some cases on the previously known bounds for spherical codes and designs. Some examples are given in Section 6.",
"title": ""
},
{
"docid": "0f9ef379901c686df08dd0d1bb187e22",
"text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by <i>R</i>(<i>d</i>) + √<i>V</i>(<i>d</i>)/(<i>n</i>) <i>Q</i><sup>-1</sup>(ϵ), where <i>R</i>(<i>d</i>) is the rate-distortion function, <i>V</i>(<i>d</i>) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and <i>Q</i><sup>-1</sup>(·) is the inverse of the standard Gaussian complementary cumulative distribution function.",
"title": ""
},
{
"docid": "ed98eb7aa069c00e2be8a27ef889b623",
"text": "The class imbalance problem has been known to hinder the learning performance of classification algorithms. Various real-world classification tasks such as text categorization suffer from this phenomenon. We demonstrate that active learning is capable of solving the problem.",
"title": ""
},
{
"docid": "8af7826c809eb3941c2e394899ca83ef",
"text": "The development of interactive rehabilitation technologies which rely on wearable-sensing for upper body rehabilitation is attracting increasing research interest. This paper reviews related research with the aim: 1) To inventory and classify interactive wearable systems for movement and posture monitoring during upper body rehabilitation, regarding the sensing technology, system measurements and feedback conditions; 2) To gauge the wearability of the wearable systems; 3) To inventory the availability of clinical evidence supporting the effectiveness of related technologies. A systematic literature search was conducted in the following search engines: PubMed, ACM, Scopus and IEEE (January 2010–April 2016). Forty-five papers were included and discussed in a new cuboid taxonomy which consists of 3 dimensions: sensing technology, feedback modalities and system measurements. Wearable sensor systems were developed for persons in: 1) Neuro-rehabilitation: stroke (n = 21), spinal cord injury (n = 1), cerebral palsy (n = 2), Alzheimer (n = 1); 2) Musculoskeletal impairment: ligament rehabilitation (n = 1), arthritis (n = 1), frozen shoulder (n = 1), bones trauma (n = 1); 3) Others: chronic pulmonary obstructive disease (n = 1), chronic pain rehabilitation (n = 1) and other general rehabilitation (n = 14). Accelerometers and inertial measurement units (IMU) are the most frequently used technologies (84% of the papers). They are mostly used in multiple sensor configurations to measure upper limb kinematics and/or trunk posture. Sensors are placed mostly on the trunk, upper arm, the forearm, the wrist, and the finger. Typically sensors are attachable rather than embedded in wearable devices and garments; although studies that embed and integrate sensors are increasing in the last 4 years. 16 studies applied knowledge of result (KR) feedback, 14 studies applied knowledge of performance (KP) feedback and 15 studies applied both in various modalities. 16 studies have conducted their evaluation with patients and reported usability tests, while only three of them conducted clinical trials including one randomized clinical trial. This review has shown that wearable systems are used mostly for the monitoring and provision of feedback on posture and upper extremity movements in stroke rehabilitation. The results indicated that accelerometers and IMUs are the most frequently used sensors, in most cases attached to the body through ad hoc contraptions for the purpose of improving range of motion and movement performance during upper body rehabilitation. Systems featuring sensors embedded in wearable appliances or garments are only beginning to emerge. Similarly, clinical evaluations are scarce and are further needed to provide evidence on effectiveness and pave the path towards implementation in clinical settings.",
"title": ""
},
{
"docid": "c5dee985cbfd6c22beca6e2dad895efa",
"text": "Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems of machine learning and computer vision. In this paper, we aim to provide insight on the property of convolutional neural networks, as well as a generic method to improve the performance of many CNN architectures. Specifically, we first examine existing CNN models and observe an intriguing property that the filters in the lower layers form pairs (i.e., filters with opposite phase). Inspired by our observation, we propose a novel, simple yet effective activation scheme called concatenated ReLU (CRelu) and theoretically analyze its reconstruction property in CNNs. We integrate CRelu into several state-of-the-art CNN architectures and demonstrate improvement in their recognition performance on CIFAR-10/100 and ImageNet datasets with fewer trainable parameters. Our results suggest that better understanding of the properties of CNNs can lead to significant performance improvement with a simple modification.",
"title": ""
},
{
"docid": "1569bcea0c166d9bf2526789514609c5",
"text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.",
"title": ""
},
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
},
{
"docid": "1b646a8a45b65799bbf2e71108f420e0",
"text": "Dynamic Time Warping (DTW) is a distance measure that compares two time series after optimally aligning them. DTW is being used for decades in thousands of academic and industrial projects despite the very expensive computational complexity, O(n2). These applications include data mining, image processing, signal processing, robotics and computer graphics among many others. In spite of all this research effort, there are many myths and misunderstanding about DTW in the literature, for example \"it is too slow to be useful\" or \"the warping window size does not matter much.\" In this tutorial, we correct these misunderstandings and we summarize the research efforts in optimizing both the efficiency and effectiveness of both the basic DTW algorithm, and of the higher-level algorithms that exploit DTW such as similarity search, clustering and classification. We will discuss variants of DTW such as constrained DTW, multidimensional DTW and asynchronous DTW, and optimization techniques such as lower bounding, early abandoning, run-length encoding, bounded approximation and hardware optimization. We will discuss a multitude of application areas including physiological monitoring, social media mining, activity recognition and animal sound processing. The optimization techniques are generalizable to other domains on various data types and problems.",
"title": ""
},
{
"docid": "38d1e06642f12138f8b0a90deeb96979",
"text": "Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit the abundance of patterns of code. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities.",
"title": ""
},
{
"docid": "41c5dbb3e903c007ba4b8f37d40b06ef",
"text": "BACKGROUND\nMyocardial infarction (MI) can directly cause ischemic mitral regurgitation (IMR), which has been touted as an indicator of poor prognosis in acute and early phases after MI. However, in the chronic post-MI phase, prognostic implications of IMR presence and degree are poorly defined.\n\n\nMETHODS AND RESULTS\nWe analyzed 303 patients with previous (>16 days) Q-wave MI by ECG who underwent transthoracic echocardiography: 194 with IMR quantitatively assessed in routine practice and 109 without IMR matched for baseline age (71+/-11 versus 70+/-9 years, P=0.20), sex, and ejection fraction (EF, 33+/-14% versus 34+/-11%, P=0.14). In IMR patients, regurgitant volume (RVol) and effective regurgitant orifice (ERO) area were 36+/-24 mL/beat and 21+/-12 mm(2), respectively. After 5 years, total mortality and cardiac mortality for patients with IMR (62+/-5% and 50+/-6%, respectively) were higher than for those without IMR (39+/-6% and 30+/-5%, respectively) (both P<0.001). In multivariate analysis, independently of all baseline characteristics, particularly age and EF, the adjusted relative risks of total and cardiac mortality associated with the presence of IMR (1.88, P=0.003 and 1.83, P=0.014, respectively) and quantified degree of IMR defined by RVol >/=30 mL (2.05, P=0.002 and 2.01, P=0.009) and by ERO >/=20 mm(2) (2.23, P=0.003 and 2.38, P=0.004) were high.\n\n\nCONCLUSIONS\nIn the chronic phase after MI, IMR presence is associated with excess mortality independently of baseline characteristics and degree of ventricular dysfunction. The mortality risk is related directly to the degree of IMR as defined by ERO and RVol. Therefore, IMR detection and quantification provide major information for risk stratification and clinical decision making in the chronic post-MI phase.",
"title": ""
},
{
"docid": "65eb604a2d45f29923ba24976130adc1",
"text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.",
"title": ""
},
{
"docid": "29fa75e49d4179072ec25b8aab6b48e2",
"text": "We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituentand dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.",
"title": ""
},
{
"docid": "343ba137056cac30d0d37e17a425d53b",
"text": "This thesis explores fundamental improvements in unsupervised deep learning algorithms. Taking a theoretical perspective on the purpose of unsupervised learning, and choosing learnt approximate inference in a jointly learnt directed generative model as the approach, the main question is how existing implementations of this approach, in particular auto-encoders, could be improved by simultaneously rethinking the way they learn and the way they perform inference. In such network architectures, the availability of two opposing pathways, one for inference and one for generation, allows to exploit the symmetry between them and to let either provide feedback signals to the other. The signals can be used to determine helpful updates for the connection weights from only locally available information, removing the need for the conventional back-propagation path and mitigating the issues associated with it. Moreover, feedback loops can be added to the usual usual feed-forward network to improve inference itself. The reciprocal connectivity between regions in the brain’s neocortex provides inspiration for how the iterative revision and verification of proposed interpretations could result in a fair approximation to optimal Bayesian inference. While extracting and combining underlying ideas from research in deep learning and cortical functioning, this thesis walks through the concepts of generative models, approximate inference, local learning rules, target propagation, recirculation, lateral and biased competition, predictive coding, iterative and amortised inference, and other related topics, in an attempt to build up a complex of insights that could provide direction to future research in unsupervised deep learning methods.",
"title": ""
},
{
"docid": "d6ea13f26642dfcb28b63ff43a0b39e1",
"text": "This paper deals with the inter-turn short circuit fault analysis of Pulse Width Modulated (PWM) inverter fed three-phase Induction Motor (IM) using Finite Element Method (FEM). The short circuit in the stator winding of a 3-phase IM start with an inter-turn fault and if left undetected it progresses to a phase-phase fault or phase-ground fault. In main fed IM a popular technique known as Motor Current Signature Analysis (MCSA) is used to detect the inter-turn fault. But if the machine is fed from PWM inverter MCSA fails, due to high frequency inverter switching, the current spectrum will be rich in noise causing the fault detection difficult. An electromagnetic field analysis of inverter fed IM is carried out with 25% and 50% of stator winding inter-turn short circuit fault severity using FEM. The simulation is carried out on a 2.2kW IM using Ansys Maxwell Finite Element Analysis (FEA) tool. Comparisons are made on the various electromagnetic field parameters like flux lines distribution, flux density, radial air gap flux density between a healthy and faulty (25% & 50% severity) IM.",
"title": ""
},
{
"docid": "87c973e92ef3affcff4dac0d0183067c",
"text": "Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.",
"title": ""
}
] | scidocsrr |
51921151c2e3c4b4fa039456a32f955f | A task-driven approach to time scale detection in dynamic networks | [
{
"docid": "b89a3bc8aa519ba1ccc818fe2a54b4ff",
"text": "We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to be able to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55 (p 0.01), which has far-reaching implications for past and future research on social networks.",
"title": ""
},
{
"docid": "e4890b63e9a51029484354535765801c",
"text": "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.",
"title": ""
}
] | [
{
"docid": "d02e87a00aaf29a86cf94ad0c539fd0d",
"text": "Future advanced driver assistance systems will contain multiple sensors that are used for several applications, such as highly automated driving on freeways. The problem is that the sensors are usually asynchronous and their data possibly out-of-sequence, making fusion of the sensor data non-trivial. This paper presents a novel approach to track-to-track fusion for automotive applications with asynchronous and out-of-sequence sensors using information matrix fusion. This approach solves the problem of correlation between sensor data due to the common process noise and common track history, which eliminates the need to replace the global track estimate with the fused local estimate at each fusion cycle. The information matrix fusion approach is evaluated in simulation and its performance demonstrated using real sensor data on a test vehicle designed for highly automated driving on freeways.",
"title": ""
},
{
"docid": "8972e89b0b06bf25e72f8cb82b6d629a",
"text": "Community detection is an important task for mining the structure and function of complex networks. Generally, there are several different kinds of nodes in a network which are cluster nodes densely connected within communities, as well as some special nodes like hubs bridging multiple communities and outliers marginally connected with a community. In addition, it has been shown that there is a hierarchical structure in complex networks with communities embedded within other communities. Therefore, a good algorithm is desirable to be able to not only detect hierarchical communities, but also identify hubs and outliers. In this paper, we propose a parameter-free hierarchical network clustering algorithm SHRINK by combining the advantages of density-based clustering and modularity optimization methods. Based on the structural connectivity information, the proposed algorithm can effectively reveal the embedded hierarchical community structure with multiresolution in large-scale weighted undirected networks, and identify hubs and outliers as well. Moreover, it overcomes the sensitive threshold problem of density-based clustering algorithms and the resolution limit possessed by other modularity-based methods. To illustrate our methodology, we conduct experiments with both real-world and synthetic datasets for community detection, and compare with many other baseline methods. Experimental results demonstrate that SHRINK achieves the best performance with consistent improvements.",
"title": ""
},
{
"docid": "5c32b7bea7470a50a900a62e1a3dffc3",
"text": "Recommender systems (RSs) have been the most important technology for increasing the business in Taobao, the largest online consumer-to-consumer (C2C) platform in China. There are three major challenges facing RS in Taobao: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on a well-known graph embedding framework. We first construct an item graph from users' behavior history, and learn the embeddings of all items in the graph. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the graph embedding framework. We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. Experimental results from offline experiments show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. Using A/B test, we show that the online Click-Through-Rates (CTRs) are improved comparing to the previous collaborative filtering based methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.",
"title": ""
},
{
"docid": "e8c6cdc70be62c6da150b48ba69c0541",
"text": "Stress granules and processing bodies are related mRNA-containing granules implicated in controlling mRNA translation and decay. A genomic screen identifies numerous factors affecting granule formation, including proteins involved in O-GlcNAc modifications. These results highlight the importance of post-translational modifications in translational control and mRNP granule formation.",
"title": ""
},
{
"docid": "8c0a8816028e8c50ebccbd812ee3a4e5",
"text": "Songs are representation of audio signal and musical instruments. An audio signal separation system should be able to identify different audio signals such as speech, background noise and music. In a song the singing voice provides useful information regarding pitch range, music content, music tempo and rhythm. An automatic singing voice separation system is used for attenuating or removing the music accompaniment. The paper presents survey of the various algorithm and method for separating singing voice from musical background. From the survey it is observed that most of researchers used Robust Principal Component Analysis method for separation of singing voice from music background, by taking into account the rank of music accompaniment and the sparsity of singing voices.",
"title": ""
},
{
"docid": "8f1d27581e7a83e378129e4287c64bd9",
"text": "Online social media plays an increasingly significant role in shaping the political discourse during elections worldwide. In the 2016 U.S. presidential election, political campaigns strategically designed candidacy announcements on Twitter to produce a significant increase in online social media attention. We use large-scale online social media communications to study the factors of party, personality, and policy in the Twitter discourse following six major presidential campaign announcements for the 2016 U.S. presidential election. We observe that all campaign announcements result in an instant bump in attention, with up to several orders of magnitude increase in tweets. However, we find that Twitter discourse as a result of this bump in attention has overwhelmingly negative sentiment. The bruising criticism, driven by crosstalk from Twitter users of opposite party affiliations, is organized by hashtags such as #NoMoreBushes and #WhyImNotVotingForHillary. We analyze how people take to Twitter to criticize specific personality traits and policy positions of presidential candidates.",
"title": ""
},
{
"docid": "76d260180b588f881f1009a420a35b3b",
"text": "Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes. Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving.",
"title": ""
},
{
"docid": "49b0cf976357d0c943ff003526ffff1f",
"text": "Transcranial direct current stimulation (tDCS) is a promising tool for neurocognitive enhancement. Several studies have shown that just a single session of tDCS over the left dorsolateral pFC (lDLPFC) can improve the core cognitive function of working memory (WM) in healthy adults. Yet, recent studies combining multiple sessions of anodal tDCS over lDLPFC with verbal WM training did not observe additional benefits of tDCS in subsequent stimulation sessions nor transfer of benefits to novel WM tasks posttraining. Using an enhanced stimulation protocol as well as a design that included a baseline measure each day, the current study aimed to further investigate the effects of multiple sessions of tDCS on WM. Specifically, we investigated the effects of three subsequent days of stimulation with anodal (20 min, 1 mA) versus sham tDCS (1 min, 1 mA) over lDLPFC (with a right supraorbital reference) paired with a challenging verbal WM task. WM performance was measured with a verbal WM updating task (the letter n-back) in the stimulation sessions and several WM transfer tasks (different letter set n-back, spatial n-back, operation span) before and 2 days after stimulation. Anodal tDCS over lDLPFC enhanced WM performance in the first stimulation session, an effect that remained visible 24 hr later. However, no further gains of anodal tDCS were observed in the second and third stimulation sessions, nor did benefits transfer to other WM tasks at the group level. Yet, interestingly, post hoc individual difference analyses revealed that in the anodal stimulation group the extent of change in WM performance on the first day of stimulation predicted pre to post changes on both the verbal and the spatial transfer task. Notably, this relationship was not observed in the sham group. Performance of two individuals worsened during anodal stimulation and on the transfer tasks. Together, these findings suggest that repeated anodal tDCS over lDLPFC combined with a challenging WM task may be an effective method to enhance domain-independent WM functioning in some individuals, but not others, or can even impair WM. They thus call for a thorough investigation into individual differences in tDCS respondence as well as further research into the design of multisession tDCS protocols that may be optimal for boosting cognition across a wide range of individuals.",
"title": ""
},
{
"docid": "300485eefc3020135cdaa31ad36f7462",
"text": "The number of cyber threats is constantly increasing. In 2013, 200,000 malicious tools were identified each day by antivirus vendors. This figure rose to 800,000 per day in 2014 and then to 1.8 million per day in 2016! The bar of 3 million per day will be crossed in 2017. Traditional security tools (mainly signature-based) show their limits and are less and less effective to detect these new cyber threats. Detecting never-seen-before or zero-day malware, including ransomware, efficiently requires a new approach in cyber security management. This requires a move from signature-based detection to behavior-based detection. We have developed a data breach detection system named CDS using Machine Learning techniques which is able to identify zero-day malware by analyzing the network traffic. In this paper, we present the capability of the CDS to detect zero-day ransomware, particularly WannaCry.",
"title": ""
},
{
"docid": "ad4c9b26e0273ada7236068fb8ac4729",
"text": "Understanding user participation is fundamental in anticipating the popularity of online content. In this paper, we explore how the number of users' comments during a short observation period after publication can be used to predict the expected popularity of articles published by a countrywide online newspaper. We evaluate a simple linear prediction model on a real dataset of hundreds of thousands of articles and several millions of comments collected over a period of four years. Analyzing the accuracy of our proposed model for different values of its basic parameters we provide valuable insights on the potentials and limitations for predicting content popularity based on early user activity.",
"title": ""
},
{
"docid": "f55e380c158ae01812f009fd81642d7f",
"text": "In this paper, we proposed a system to effectively create music mashups – a kind of re-created music that is made by mixing parts of multiple existing music pieces. Unlike previous studies which merely generate mashups by overlaying music segments on one single base track, the proposed system creates mashups with multiple background (e.g. instrumental) and lead (e.g. vocal) track segments. So, besides the suitability between the vertically overlaid tracks (i.e. vertical mashability) used in previous studies, we proposed to further consider the suitability between the horizontally connected consecutive music segments (i.e. horizontal mashability) when searching for proper music segments to be combined. On the vertical side, two new factors: “harmonic change balance” and “volume weight” have been considered. On the horizontal side, the methods used in the studies of medley creation are incorporated. Combining vertical and horizontal mashabilities together, we defined four levels of mashability that may be encountered and found the proper solution to each of them. Subjective evaluations showed that the proposed four levels of mashability can appropriately reflect the degrees of listening enjoyment. Besides, by taking the newly proposed vertical mashability measurement into account, the improvement in user satisfaction is statistically significant.",
"title": ""
},
{
"docid": "6c149f1f6e9dc859bf823679df175afb",
"text": "Neurofeedback is attracting renewed interest as a method to self-regulate one's own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.",
"title": ""
},
{
"docid": "6982c79b6fa2cda4f0323421f8e3b4be",
"text": "We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task – predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.",
"title": ""
},
{
"docid": "f7a1eaa86a81b104a9ae62dc87c495aa",
"text": "In the Internet of Things, the extreme heterogeneity of sensors, actuators and user devices calls for new tools and design models able to translate the user's needs in machine-understandable scenarios. The scientific community has proposed different solution for such issue, e.g., the MQTT (MQ Telemetry Transport) protocol introduced the topic concept as “the key that identifies the information channel to which payload data is published”. This study extends the topic approach by proposing the Web of Topics (WoX), a conceptual model for the IoT. A WoX Topic is identified by two coordinates: (i) a discrete semantic feature of interest (e.g. temperature, humidity), and (ii) a URI-based location. An IoT entity defines its role within a Topic by specifying its technological and collaborative dimensions. By this approach, it is easier to define an IoT entity as a set of couples Topic-Role. In order to prove the effectiveness of the WoX approach, we developed the WoX APIs on top of an EPCglobal implementation. Then, 10 developers were asked to build a WoX-based application supporting a physics lab scenario at school. They also filled out an ex-ante and an ex-post questionnaire. A set of qualitative and quantitative metrics allowed measuring the model's outcome.",
"title": ""
},
{
"docid": "645f49ff21d31bb99cce9f05449df0d7",
"text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.",
"title": ""
},
{
"docid": "9dac75a40e421163c4e05cfd5d36361f",
"text": "In recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. The association rule model was recently proposed in order to discover useful patterns and dependencies in such data. This paper discusses a method for indexing market basket data efficiently for similarity search. The technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. We propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. The construction of the index structure is independent of the similarity function, which can be specified at query time. The resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size.",
"title": ""
},
{
"docid": "29ac2afc399bbf61927c4821d3a6e0a0",
"text": "A well used approach for echo cancellation is the two-path method, where two adaptive filters in parallel are utilized. Typically, one filter is continuously updated, and when this filter is considered better adjusted to the echo-path than the other filter, the coefficients of the better adjusted filter is transferred to the other filter. When this transfer should occur is controlled by the transfer logic. This paper proposes transfer logic that is both more robust and more simple to tune, owing to fewer parameters, than the conventional approach. Extensive simulations show the advantages of the proposed method.",
"title": ""
},
{
"docid": "510439267c11c53b31dcf0b1c40e331b",
"text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context.In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. This approach is then extended to cover several weaknesses of the adaptation. Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.",
"title": ""
},
{
"docid": "283708fe3c950ac08bf932d68feb6d56",
"text": "Diabetic wounds are unlike typical wounds in that they are slower to heal, making treatment with conventional topical medications an uphill process. Among several different alternative therapies, honey is an effective choice because it provides comparatively rapid wound healing. Although honey has been used as an alternative medicine for wound healing since ancient times, the application of honey to diabetic wounds has only recently been revived. Because honey has some unique natural features as a wound healer, it works even more effectively on diabetic wounds than on normal wounds. In addition, honey is known as an \"all in one\" remedy for diabetic wound healing because it can combat many microorganisms that are involved in the wound process and because it possesses antioxidant activity and controls inflammation. In this review, the potential role of honey's antibacterial activity on diabetic wound-related microorganisms and honey's clinical effectiveness in treating diabetic wounds based on the most recent studies is described. Additionally, ways in which honey can be used as a safer, faster, and effective healing agent for diabetic wounds in comparison with other synthetic medications in terms of microbial resistance and treatment costs are also described to support its traditional claims.",
"title": ""
},
{
"docid": "df6e410fddeb22c7856f5362b7abc1de",
"text": "With the increasing prevalence of Web 2.0 and cloud computing, password-based logins play an increasingly important role on user-end systems. We use passwords to authenticate ourselves to countless applications and services. However, login credentials can be easily stolen by attackers. In this paper, we present a framework, TrustLogin, to secure password-based logins on commodity operating systems. TrustLogin leverages System Management Mode to protect the login credentials from malware even when OS is compromised. TrustLogin does not modify any system software in either client or server and is transparent to users, applications, and servers. We conduct two study cases of the framework on legacy and secure applications, and the experimental results demonstrate that TrustLogin is able to protect login credentials from real-world keyloggers on Windows and Linux platforms. TrustLogin is robust against spoofing attacks. Moreover, the experimental results also show TrustLogin introduces a low overhead with the tested applications.",
"title": ""
}
] | scidocsrr |
519cad491c492024d286bfcba25e17a6 | A Heuristics Approach for Fast Detecting Suspicious Money Laundering Cases in an Investment Bank | [
{
"docid": "e67dc912381ebbae34d16aad0d3e7d92",
"text": "In this paper, we study the problem of applying data mining to facilitate the investigation of money laundering crimes (MLCs). We have identified a new paradigm of problems --- that of automatic community generation based on uni-party data, the data in which there is no direct or explicit link information available. Consequently, we have proposed a new methodology for Link Discovery based on Correlation Analysis (LDCA). We have used MLC group model generation as an exemplary application of this problem paradigm, and have focused on this application to develop a specific method of automatic MLC group model generation based on timeline analysis using the LDCA methodology, called CORAL. A prototype of CORAL method has been implemented, and preliminary testing and evaluations based on a real MLC case data are reported. The contributions of this work are: (1) identification of the uni-party data community generation problem paradigm, (2) proposal of a new methodology LDCA to solve for problems in this paradigm, (3) formulation of the MLC group model generation problem as an example of this paradigm, (4) application of the LDCA methodology in developing a specific solution (CORAL) to the MLC group model generation problem, and (5) development, evaluation, and testing of the CORAL prototype in a real MLC case data.",
"title": ""
},
{
"docid": "0a0f4f5fc904c12cacb95e87f62005d0",
"text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.",
"title": ""
}
] | [
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "bfa178f35027a55e8fd35d1c87789808",
"text": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional reg ularities that are salient in the data.",
"title": ""
},
{
"docid": "56cf91a279fdcee59841cb9b8c866626",
"text": "This paper describes a new maximum-power-point-tracking method for a photovoltaic system based on the Lagrange Interpolation Formula and proposes the particle swarm optimization method. The proposed control scheme eliminates the problems of conventional methods by using only a simple numerical calculation to initialize the particles around the global maximum power point. Hence, the suggested control scheme will utilize less iterations to reach the maximum power point. Simulation study is carried out using MATLAB/SIMULINK and compared with the Perturb and Observe method, the Incremental Conductance method, and the conventional Particle Swarm Optimization algorithm. The proposed algorithm is verified with the OPAL-RT real-time simulator. The simulation results confirm that the proposed algorithm can effectively enhance the stability and the fast tracking capability under abnormal insolation conditions.",
"title": ""
},
{
"docid": "70d7c838e7b5c4318e8764edb5a70555",
"text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees’ decisions to quit. In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers’ job embeddedness and job search behaviors explain variance in individual “voluntary turnover” over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers’ job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. Implications are discussed.",
"title": ""
},
{
"docid": "9fab400cba6d9c91aba707c6952889f8",
"text": "Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called ‘adversarial subspaces’) in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets . Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.",
"title": ""
},
{
"docid": "db1d87d3e5ab39ef639d7c53a740340a",
"text": "Plants are natural producers of chemical substances, providing potential treatment of human ailments since ancient times. Some herbal chemicals in medicinal plants of traditional and modern medicine carry the risk of herb induced liver injury (HILI) with a severe or potentially lethal clinical course, and the requirement of a liver transplant. Discontinuation of herbal use is mandatory in time when HILI is first suspected as diagnosis. Although, herbal hepatotoxicity is of utmost clinical and regulatory importance, lack of a stringent causality assessment remains a major issue for patients with suspected HILI, while this problem is best overcome by the use of the hepatotoxicity specific CIOMS (Council for International Organizations of Medical Sciences) scale and the evaluation of unintentional reexposure test results. Sixty five different commonly used herbs, herbal drugs, and herbal supplements and 111 different herbs or herbal mixtures of the traditional Chinese medicine (TCM) are reported causative for liver disease, with levels of causality proof that appear rarely conclusive. Encouraging steps in the field of herbal hepatotoxicity focus on introducing analytical methods that identify cases of intrinsic hepatotoxicity caused by pyrrolizidine alkaloids, and on omics technologies, including genomics, proteomics, metabolomics, and assessing circulating micro-RNA in the serum of some patients with intrinsic hepatotoxicity. It remains to be established whether these new technologies can identify idiosyncratic HILI cases. To enhance its globalization, herbal medicine should universally be marketed as herbal drugs under strict regulatory surveillance in analogy to regulatory approved chemical drugs, proving a positive risk/benefit profile by enforcing evidence based clinical trials and excellent herbal drug quality.",
"title": ""
},
{
"docid": "57290d8e0a236205c4f0ce887ffed3ab",
"text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.",
"title": ""
},
{
"docid": "a6e2652aa074719ac2ca6e94d12fed03",
"text": "■ Lincoln Laboratory led the nation in the development of high-power wideband radar with a unique capability for resolving target scattering centers and producing three-dimensional images of individual targets. The Laboratory fielded the first wideband radar, called ALCOR, in 1970 at Kwajalein Atoll. Since 1970 the Laboratory has developed and fielded several other wideband radars for use in ballistic-missile-defense research and space-object identification. In parallel with these radar systems, the Laboratory has developed high-capacity, high-speed signal and data processing techniques and algorithms that permit generation of target images and derivation of other target features in near real time. It has also pioneered new ways to realize improved resolution and scatterer-feature identification in wideband radars by the development and application of advanced signal processing techniques. Through the analysis of dynamic target images and other wideband observables, we can acquire knowledge of target form, structure, materials, motion, mass distribution, identifying features, and function. Such capability is of great benefit in ballistic missile decoy discrimination and in space-object identification.",
"title": ""
},
{
"docid": "e82cd7c22668b0c9ed62b4afdf49d1f4",
"text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.",
"title": ""
},
{
"docid": "10d9758469a1843d426f56a379c2fecb",
"text": "A novel compact-size branch-line coupler using composite right/left-handed transmission lines is proposed in this paper. In order to obtain miniaturization, composite right/left-handed transmission lines with novel complementary split single ring resonators which are realized by loading a pair of meander-shaped-slots in the split of the ring are designed. This novel coupler occupies only 22.8% of the area of the conventional approach at 0.7 GHz. The proposed coupler can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements and via-holes, making it very useful for wireless communication systems. The agreement between measured and stimulated results validates the feasible configuration of the proposed coupler.",
"title": ""
},
{
"docid": "58858f0cd3561614f1742fe7b0380861",
"text": "This study focuses on how technology can encourage and ease awkwardness-free communications between people in real-world scenarios. We propose a device, The Wearable Aura, able to project a personalized animation onto one's Personal Distance zone. This projection, as an extension of one-self is reactive to user's cognitive status, aware of its environment, context and user's activity. Our user study supports the idea that an interactive projection around an individual can indeed benefit the communications with other individuals.",
"title": ""
},
{
"docid": "e5539337c36ec7a03bf327069156ea2c",
"text": "An approach is proposed to estimate the location, velocity, and acceleration of a target vehicle to avoid a possible collision. Radial distance, velocity, and acceleration are extracted from the hybrid linear frequency modulation (LFM)/frequency-shift keying (FSK) echoed signals and then processed using the Kalman filter and the trilateration process. This approach proves to converge fast with good accuracy. Two other approaches, i.e., an extended Kalman filter (EKF) and a two-stage Kalman filter (TSKF), are used as benchmarks for comparison. Several scenarios of vehicle movement are also presented to demonstrate the effectiveness of this approach.",
"title": ""
},
{
"docid": "1ad353e3d7765e1681c062c777087be7",
"text": "The cyber world provides an anonymous environment for criminals to conduct malicious activities such as spamming, sending ransom e-mails, and spreading botnet malware. Often, these activities involve textual communication between a criminal and a victim, or between criminals themselves. The forensic analysis of online textual documents for addressing the anonymity problem called authorship analysis is the focus of most cybercrime investigations. Authorship analysis is the statistical study of linguistic and computational characteristics of the written documents of individuals. This paper is the first work that presents a unified data mining solution to address authorship analysis problems based on the concept of frequent pattern-based writeprint. Extensive experiments on real-life data suggest that our proposed solution can precisely capture the writing styles of individuals. Furthermore, the writeprint is effective to identify the author of an anonymous text from ∗Corresponding author Email addresses: [email protected] (Farkhund Iqbal), [email protected] (Hamad Binsalleeh), [email protected] (Benjamin C. M. Fung), [email protected] (Mourad Debbabi) Preprint submitted to Information Sciences March 10, 2011 a group of suspects and to infer sociolinguistic characteristics of the author.",
"title": ""
},
{
"docid": "fb6494dcf01a927597ff784a3323e8c2",
"text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly are critical in increasing repair efficiency and assuring the quality of newly manufactured machines. Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in both cast and fabricated rotors along with additional deficiencies in quantifiable test results and arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.",
"title": ""
},
{
"docid": "d0e5ddcc0aa85ba6a3a18796c335dcd2",
"text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of <inline-formula> <tex-math notation=\"LaTeX\">$0.05\\lambda _{\\mathrm{0}}$ </tex-math></inline-formula> (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\vert S_{11}\\vert < -10 $ </tex-math></inline-formula> dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).",
"title": ""
},
{
"docid": "70c6da9da15ad40b4f64386b890ccf51",
"text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.",
"title": ""
},
{
"docid": "0fb45311d5e6a7348917eaa12ffeab46",
"text": "Question Answering is a task which requires building models capable of providing answers to questions expressed in human language. Full question answering involves some form of reasoning ability. We introduce a neural network architecture for this task, which is a form of Memory Network, that recognizes entities and their relations to answers through a focus attention mechanism. Our model is named Question Dependent Recurrent Entity Network and extends Recurrent Entity Network by exploiting aspects of the question during the memorization process. We validate the model on both synthetic and real datasets: the bAbI question answering dataset and the CNN & Daily News reading comprehension dataset. In our experiments, the models achieved a State-ofThe-Art in the former and competitive results in the latter.",
"title": ""
},
{
"docid": "decbbd09bcf7a36a3886d52864e9a08c",
"text": "INTRODUCTION\nBirth preparedness and complication readiness (BPCR) is a strategy to promote timely use of skilled maternal and neonatal care during childbirth. According to World Health Organization, BPCR should be a key component of focused antenatal care. Dakshina Kannada, a coastal district of Karnataka state, is categorized as a high-performing district (institutional delivery rate >25%) under the National Rural Health Mission. However, a substantial proportion of women in the district experience complications during pregnancy (58.3%), childbirth (45.7%), and postnatal (17.4%) period. There is a paucity of data on BPCR practice and the factors associated with it in the district. Exploring this would be of great use in the evidence-based fine-tuning of ongoing maternal and child health interventions.\n\n\nOBJECTIVE\nTo assess BPCR practice and the factors associated with it among the beneficiaries of two rural Primary Health Centers (PHCs) of Dakshina Kannada district, Karnataka, India.\n\n\nMETHODS\nA facility-based cross-sectional study was conducted among 217 pregnant (>28 weeks of gestation) and recently delivered (in the last 6 months) women in two randomly selected PHCs from June -September 2013. Exit interviews were conducted using a pre-designed semi-structured interview schedule. Information regarding socio-demographic profile, obstetric variables, and knowledge of key danger signs was collected. BPCR included information on five key components: identified the place of delivery, saved money to pay for expenses, mode of transport identified, identified a birth companion, and arranged a blood donor if the need arises. In this study, a woman who recalled at least two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (total six) was considered as knowledgeable on key danger signs. Optimal BPCR practice was defined as following at least three out of five key components of BPCR.\n\n\nOUTCOME MEASURES\nProportion, Odds ratio, and adjusted Odds ratio (adj OR) for optimal BPCR practice.\n\n\nRESULTS\nA total of 184 women completed the exit interview (mean age: 26.9±3.9 years). Optimal BPCR practice was observed in 79.3% (95% CI: 73.5-85.2%) of the women. Multivariate logistic regression revealed that age >26 years (adj OR = 2.97; 95%CI: 1.15-7.7), economic status of above poverty line (adj OR = 4.3; 95%CI: 1.12-16.5), awareness of minimum two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (adj OR = 3.98; 95%CI: 1.4-11.1), preference to private health sector for antenatal care/delivery (adj OR = 2.9; 95%CI: 1.1-8.01), and woman's discussion about the BPCR with her family members (adj OR = 3.4; 95%CI: 1.1-10.4) as the significant factors associated with optimal BPCR practice.\n\n\nCONCLUSION\nIn this study population, BPCR practice was better than other studies reported from India. Healthcare workers at the grassroots should be encouraged to involve women's family members while explaining BPCR and key danger signs with a special emphasis on young (<26 years) and economically poor women. Ensuring a reinforcing discussion between woman and her family members may further enhance the BPCR practice.",
"title": ""
},
{
"docid": "91eaef6e482601533656ca4786b7a023",
"text": "Budget optimization is one of the primary decision-making issues faced by advertisers in search auctions. A quality budget optimization strategy can significantly improve the effectiveness of search advertising campaigns, thus helping advertisers to succeed in the fierce competition of online marketing. This paper investigates budget optimization problems in search advertisements and proposes a novel hierarchical budget optimization framework (BOF), with consideration of the entire life cycle of advertising campaigns. Then, we formulated our BOF framework, made some mathematical analysis on some desirable properties, and presented an effective solution algorithm. Moreover, we established a simple but illustrative instantiation of our BOF framework which can help advertisers to allocate and adjust the budget of search advertising campaigns. Our BOF framework provides an open testbed environment for various strategies of budget allocation and adjustment across search advertising markets. With field reports and logs from real-world search advertising campaigns, we designed some experiments to evaluate the effectiveness of our BOF framework and instantiated strategies. Experimental results are quite promising, where our BOF framework and instantiated strategies perform better than two baseline budget strategies commonly used in practical advertising campaigns.",
"title": ""
},
{
"docid": "bba4d637cf40e81ea89e61e875d3c425",
"text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.",
"title": ""
}
] | scidocsrr |
6964ce910279f7c1e3eaec5191d4cf7f | A Learning-based Neural Network Model for the Detection and Classification of SQL Injection Attacks | [
{
"docid": "5025766e66589289ccc31e60ca363842",
"text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.",
"title": ""
}
] | [
{
"docid": "b743159683f5cb99e7b5252dbc9ae74f",
"text": "When human agents come together to make decisions it is often the case that one human agent has more information than the other and this phenomenon is called information asymmetry and this distorts the market. Often if one human agent intends to manipulate a decision in its favor the human agent can signal wrong or right information. Alternatively, one human agent can screen for information to reduce the impact of asymmetric information on decisions. With the advent of artificial intelligence, signaling and screening have been made easier. This chapter studies the impact of artificial intelligence on the theory of asymmetric information. It is surmised that artificial intelligent agents reduce the degree of information asymmetry and thus the market where these agents are deployed become more efficient. It is also postulated that the more artificial intelligent agents there are deployed in the market the less is the volume of trades in the market. This is because for trade to happen the asymmetry of information on goods and services to be traded should exist.",
"title": ""
},
{
"docid": "4995bb31547a98adbe98c7a9f2bfa947",
"text": "This paper describes our proposed solutions designed for a STS core track within the SemEval 2016 English Semantic Textual Similarity (STS) task. Our method of similarity detection combines recursive autoencoders with a WordNet award-penalty system that accounts for semantic relatedness, and an SVM classifier, which produces the final score from similarity matrices. This solution is further supported by an ensemble classifier, combining an aligner with a bi-directional Gated Recurrent Neural Network and additional features, which then performs Linear Support Vector Regression to determine another set of scores.",
"title": ""
},
{
"docid": "49215cb8cb669aef5ea42dfb1e7d2e19",
"text": "Many people rely on Web-based tutorials to learn how to use complex software. Yet, it remains difficult for users to systematically explore the set of tutorials available online. We present Sifter, an interface for browsing, comparing and analyzing large collections of image manipulation tutorials based on their command-level structure. Sifter first applies supervised machine learning to identify the commands contained in a collection of 2500 Photoshop tutorials obtained from the Web. It then provides three different views of the tutorial collection based on the extracted command-level structure: (1) A Faceted Browser View allows users to organize, sort and filter the collection based on tutorial category, command names or on frequently used command subsequences, (2) a Tutorial View summarizes and indexes tutorials by the commands they contain, and (3) an Alignment View visualizes the commandlevel similarities and differences between a subset of tutorials. An informal evaluation (n=9) suggests that Sifter enables users to successfully perform a variety of browsing and analysis tasks that are difficult to complete with standard keyword search. We conclude with a meta-analysis of our Photoshop tutorial collection and present several implications for the design of image manipulation software. ACM Classification H5.2 [Information interfaces and presentation]: User Interfaces. Graphical user interfaces. Author",
"title": ""
},
{
"docid": "889dd22fcead3ce546e760bda8ef4980",
"text": "We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine effectiveness of our approach via multiple evaluations and demonstrate 12% error reduction in precision over a state-of-the-art weakly supervised baseline.",
"title": ""
},
{
"docid": "ab47d6b0ae971a5cf0a24f1934fbee63",
"text": "Deep representations, in particular ones implemented by convolutional neural networks, have led to good progress on many learning problems. However, the learned representations are hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study deep image representations by inverting them with an up-convolutional neural network. Application of this method to a deep network trained on ImageNet provides numerous insights into the properties of the feature representation. Most strikingly, the colors and the rough contours of an input image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"title": ""
},
{
"docid": "8a9603a10e5e02f6edfbd965ee11bbb9",
"text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.",
"title": ""
},
{
"docid": "f176f95d0c597b4272abe907e385befc",
"text": "This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis.",
"title": ""
},
{
"docid": "300e215e91bb49aef0fcb44c3084789e",
"text": "We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference acoustic representation containing the desired prosody. We show that conditioning Tacotron on this learned embedding space results in synthesized audio that matches the prosody of the reference signal with fine time detail even when the reference and synthesis speakers are different. Additionally, we show that a reference prosody embedding can be used to synthesize text that is different from that of the reference utterance. We define several quantitative and subjective metrics for evaluating prosody transfer, and report results with accompanying audio samples from single-speaker and 44-speaker Tacotron models on a prosody transfer task.",
"title": ""
},
{
"docid": "370b1775eddfb6241078285872e1a009",
"text": "Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL’03, ACE’05, and ClueWeb’09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.",
"title": ""
},
{
"docid": "02c00d998952d935ee694922953c78d1",
"text": "OBJECTIVE\nEffect of peppermint on exercise performance was previously investigated but equivocal findings exist. This study aimed to investigate the effects of peppermint ingestion on the physiological parameters and exercise performance after 5 min and 1 h.\n\n\nMATERIALS AND METHODS\nThirty healthy male university students were randomly divided into experimental (n=15) and control (n=15) groups. Maximum isometric grip force, vertical and long jumps, spirometric parameters, visual and audio reaction times, blood pressure, heart rate, and breath rate were recorded three times: before, five minutes, and one hour after single dose oral administration of peppermint essential oil (50 µl). Data were analyzed using repeated measures ANOVA.\n\n\nRESULTS\nOur results revealed significant improvement in all of the variables after oral administration of peppermint essential oil. Experimental group compared with control group showed an incremental and a significant increase in the grip force (36.1%), standing vertical jump (7.0%), and standing long jump (6.4%). Data obtained from the experimental group after five minutes exhibited a significant increase in the forced vital capacity in first second (FVC1)(35.1%), peak inspiratory flow rate (PIF) (66.4%), and peak expiratory flow rate (PEF) (65.1%), whereas after one hour, only PIF shown a significant increase as compare with the baseline and control group. At both times, visual and audio reaction times were significantly decreased. Physiological parameters were also significantly improved after five minutes. A considerable enhancement in the grip force, spiromery, and other parameters were the important findings of this study. Conclusion : An improvement in the spirometric measurements (FVC1, PEF, and PIF) might be due to the peppermint effects on the bronchial smooth muscle tonicity with or without affecting the lung surfactant. Yet, no scientific evidence exists regarding isometric force enhancement in this novel study.",
"title": ""
},
{
"docid": "620642c5437dc26cac546080c4465707",
"text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures. For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and converted nouns given in italics: 1",
"title": ""
},
{
"docid": "162a4cab1ea0bd1e9b8980a57df7c2bf",
"text": "This paper investigates the design of power and spectrally efficient coded modulations based on amplitude phase shift keying (APSK) with application to broadband satellite communications. Emphasis is put on 64APSK constellations. The APSK modulation has merits for digital transmission over nonlinear satellite channels due to its power and spectral efficiency combined with its inherent robustness against nonlinear distortion. This scheme has been adopted in the DVB-S2 Standard for satellite digital video broadcasting. Assuming an ideal rectangular transmission pulse, for which no nonlinear inter-symbol interference is present and perfect pre-compensation of the nonlinearity takes place, we optimize the 64APSK constellation design by employing an optimization criterion based on the mutual information. This method generates an optimum constellation for each operating SNR point, that is, for each spectral efficiency. Two separate cases of interest are particularly examined: (i) the equiprobable case, where all constellation points are equiprobable and (ii) the non-equiprobable case, where the constellation points on each ring are assumed to be equiprobable but the a priory symbol probability associated per ring is assumed different for each ring. Following the mutual information-based optimization approach in each case, detailed simulation results are obtained for the optimal 64APSK constellation settings as well as the achievable shaping gain.",
"title": ""
},
{
"docid": "25822c79792325b86a90a477b6e988a1",
"text": "As the social networking sites get more popular, spammers target these sites to spread spam posts. Twitter is one of the most popular online social networking sites where users communicate and interact on various topics. Most of the current spam filtering methods in Twitter focus on detecting the spammers and blocking them. However, spammers can create a new account and start posting new spam tweets again. So there is a need for robust spam detection techniques to detect the spam at tweet level. These types of techniques can prevent the spam in real time. To detect the spam at tweet level, often features are defined, and appropriate machine learning algorithms are applied in the literature. Recently, deep learning methods are showing fruitful results on several natural language processing tasks. We want to use the potential benefits of these two types of methods for our problem. Toward this, we propose an ensemble approach for spam detection at tweet level. We develop various deep learning models based on convolutional neural networks (CNNs). Five CNNs and one feature-based model are used in the ensemble. Each CNN uses different word embeddings (Glove, Word2vec) to train the model. The feature-based model uses content-based, user-based, and n-gram features. Our approach combines both deep learning and traditional feature-based models using a multilayer neural network which acts as a meta-classifier. We evaluate our method on two data sets, one data set is balanced, and another one is imbalanced. The experimental results show that our proposed method outperforms the existing methods.",
"title": ""
},
{
"docid": "e30db40102a2d84a150c220250fa4d36",
"text": "A voltage reference circuit operating with all transistors biased in weak inversion, providing a mean reference voltage of 257.5 mV, has been fabricated in 0.18 m CMOS technology. The reference voltage can be approximated by the difference of transistor threshold voltages at room temperature. Accurate subthreshold design allows the circuit to work at room temperature with supply voltages down to 0.45 V and an average current consumption of 5.8 nA. Measurements performed over a set of 40 samples showed an average temperature coefficient of 165 ppm/ C with a standard deviation of 100 ppm/ C, in a temperature range from 0 to 125°C. The mean line sensitivity is ≈0.44%/V, for supply voltages ranging from 0.45 to 1.8 V. The power supply rejection ratio measured at 30 Hz and simulated at 10 MHz is lower than -40 dB and -12 dB, respectively. The active area of the circuit is ≈0.043mm2.",
"title": ""
},
{
"docid": "ce2f8135fe123e09b777bd147bec4bb3",
"text": "Supervised learning, e.g., classification, plays an important role in processing and organizing microblogging data. In microblogging, it is easy to mass vast quantities of unlabeled data, but would be costly to obtain labels, which are essential for supervised learning algorithms. In order to reduce the labeling cost, active learning is an effective way to select representative and informative instances to query for labels for improving the learned model. Different from traditional data in which the instances are assumed to be independent and identically distributed (i.i.d.), instances in microblogging are networked with each other. This presents both opportunities and challenges for applying active learning to microblogging data. Inspired by social correlation theories, we investigate whether social relations can help perform effective active learning on networked data. In this paper, we propose a novel Active learning framework for the classification of Networked Texts in microblogging (ActNeT). In particular, we study how to incorporate network information into text content modeling, and design strategies to select the most representative and informative instances from microblogging for labeling by taking advantage of social network structure. Experimental results on Twitter datasets show the benefit of incorporating network information in active learning and that the proposed framework outperforms existing state-of-the-art methods.",
"title": ""
},
{
"docid": "7b916833f0d611465e36b0b2792b2fa7",
"text": "A fully-integrated silicon-based 94-GHz direct-detection imaging receiver with on-chip Dicke switch and baseband circuitry is demonstrated. Fabricated in a 0.18-µm SiGe BiCMOS technology (fT/fMAX = 200 GHz), the receiver chip achieves a peak imager responsivity of 43 MV/W with a 3-dB bandwidth of 26 GHz. A balanced LNA topology with an embedded Dicke switch provides 30-dB gain and enables a temperature resolution of 0.3–0.4 K. The imager chip consumes 200 mW from a 1.8-V supply.",
"title": ""
},
{
"docid": "d6bbec8d1426cacba7f8388231f04add",
"text": "This paper presents a novel multiple-frequency resonant inverter for induction heating (IH) applications. By adopting a center tap transformer, the proposed resonant inverter can give load switching frequency as twice as the isolated-gate bipolar transistor (IGBT) switching frequency. The structure and the operation of the proposed topology are described in order to demonstrate how the output frequency of the proposed resonant inverter is as twice as the switching frequency of IGBTs. In addition to this, the IGBTs in the proposed topology work in zero-voltage switching during turn-on phase of the switches. The new topology is verified by the experimental results using a prototype for IH applications. Moreover, increased efficiency of the proposed inverter is verified by comparison with conventional designs.",
"title": ""
},
{
"docid": "62d1fc9ea1c6a5d1f64939eff3202dad",
"text": "This research applied both the traditional and the fuzzy control methods for mobile satellite antenna tracking system design. The antenna tracking and the stabilization loops were designed firstly according to the bandwidth and phase margin requirements. However, the performance would be degraded if the tracking loop gain is reduced due to parameter variation. On the other hand a PD type of fuzzy controller was also applied for tracking loop design. It can be seen that the system performance obtained by the fuzzy controller was better for low antenna tracking gain. Thus this research proposed an adaptive law by taking either traditional or fuzzy controllers for antenna tracking system depending on the tracking loop gain, then the tracking gain parameter variation effect can be reduced.",
"title": ""
},
{
"docid": "1f4c22a725fb5cb34bb1a087ba47987e",
"text": "This paper demonstrates key capabilities of Cognitive Database, a novel AI-enabled relational database system which uses an unsupervised neural network model to facilitate semantic queries over relational data. The neural network model, called word embedding, operates on an unstructured view of the database and builds a vector model that captures latent semantic context of database entities of different types. The vector model is then seamlessly integrated into the SQL infrastructure and exposed to the users via a new class of SQL-based analytics queries known as cognitive intelligence (CI) queries. The cognitive capabilities enable complex queries over multi-modal data such as semantic matching, inductive reasoning queries such as analogies, and predictive queries using entities not present in a database. We plan to demonstrate the end-to-end execution flow of the cognitive database using a Spark based prototype. Furthermore, we demonstrate the use of CI queries using a publicaly available enterprise financial dataset (with text and numeric values). A Jupyter Notebook python based implementation will also be presented.",
"title": ""
}
] | scidocsrr |
451a376941d11616feea90f81cf4ea7d | Gami fi cation and Mobile Marketing Effectiveness | [
{
"docid": "84647b51dbbe755534e1521d9d9cf843",
"text": "Social Mediator is a forum exploring the ways that HCI research and principles interact---or might interact---with practices in the social media world.<br /><b><i>Joe McCarthy, Editor</i></b>",
"title": ""
}
] | [
{
"docid": "1286a39cec0d00f269c7490fb38f422b",
"text": "BACKGROUND\nAttention-deficit/hyperactivity disorder (ADHD) is one of the most common developmental disorders experienced in childhood and can persist into adulthood. The disorder has early onset and is characterized by a combination of overactive, poorly modulated behavior with marked inattention. In the long term it can impair academic performance, vocational success and social-emotional development. Meditation is increasingly used for psychological conditions and could be used as a tool for attentional training in the ADHD population.\n\n\nOBJECTIVES\nTo assess the effectiveness of meditation therapies as a treatment for ADHD.\n\n\nSEARCH STRATEGY\nOur extensive search included: CENTRAL, MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, C2-SPECTR, dissertation abstracts, LILACS, Virtual Health Library (VHL) in BIREME, Complementary and Alternative Medicine specific databases, HSTAT, Informit, JST, Thai Psychiatric databases and ISI Proceedings, plus grey literature and trial registries from inception to January 2010.\n\n\nSELECTION CRITERIA\nRandomized controlled trials that investigated the efficacy of meditation therapy in children or adults diagnosed with ADHD.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors extracted data independently using a pre-designed data extraction form. We contacted study authors for additional information required. We analyzed data using mean difference (MD) to calculate the treatment effect. The results are presented in tables, figures and narrative form.\n\n\nMAIN RESULTS\nFour studies, including 83 participants, are included in this review. Two studies used mantra meditation while the other two used yoga compared with drugs, relaxation training, non-specific exercises and standard treatment control. Design limitations caused high risk of bias across the studies. Only one out of four studies provided data appropriate for analysis. For this study there was no statistically significant difference between the meditation therapy group and the drug therapy group on the teacher rating ADHD scale (MD -2.72, 95% CI -8.49 to 3.05, 15 patients). Likewise, there was no statistically significant difference between the meditation therapy group and the standard therapy group on the teacher rating ADHD scale (MD -0.52, 95% CI -5.88 to 4.84, 17 patients). There was also no statistically significant difference between the meditation therapy group and the standard therapy group in the distraction test (MD -8.34, 95% CI -107.05 to 90.37, 17 patients).\n\n\nAUTHORS' CONCLUSIONS\nAs a result of the limited number of included studies, the small sample sizes and the high risk of bias, we are unable to draw any conclusions regarding the effectiveness of meditation therapy for ADHD. The adverse effects of meditation have not been reported. More trials are needed.",
"title": ""
},
{
"docid": "1b5dd28d1cb6fedeb24d7ac5195595c6",
"text": "Modulation recognition algorithms have recently received a great deal of attention in academia and industry. In addition to their application in the military field, these algorithms found civilian use in reconfigurable systems, such as cognitive radios. Most previously existing algorithms are focused on recognition of a single modulation. However, a multiple-input multiple-output two-way relaying channel (MIMO TWRC) with physical-layer network coding (PLNC) requires the recognition of the pair of sources modulations from the superposed constellation at the relay. In this paper, we propose an algorithm for recognition of sources modulations for MIMO TWRC with PLNC. The proposed algorithm is divided in two steps. The first step uses the higher order statistics based features in conjunction with genetic algorithm as a features selection method, while the second step employs AdaBoost as a classifier. Simulation results show the ability of the proposed algorithm to provide a good recognition performance at acceptable signal-to-noise values.",
"title": ""
},
{
"docid": "ca24d679117baf6f262609a5e4c1acfa",
"text": "Fake news pose serious threat to our society nowadays, particularly due to its wide spread through social networks. While human fact checkers cannot handle such tremendous information online in real time, AI technology can be leveraged to automate fake news detection. The first step leading to a sophisticated fake news detection system is the stance detection between statement and body text. In this work, we analyze the dataset from Fake News Challenge (FNC1) and explore several neural stance detection models based on the ideas of natural language inference and machine comprehension. Experiment results show that all neural network models can outperform the hand-crafted feature based system. By improving Attentive Reader with a full attention mechanism between body text and headline and implementing bilateral multi-perspective mathcing models, we are able to further bring up the performance and reach metric score close to 87%.",
"title": ""
},
{
"docid": "4d6559e3216836c475b4b069aa924a88",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Asteroids. From Observations to Models. D. Hestroffer, Paolo Tanga",
"title": ""
},
{
"docid": "cf6d0e1b0fd5a258fdcdb5a9fe8d2b65",
"text": "UNLABELLED\nPrevious studies have shown that resistance training with restricted venous blood flow (Kaatsu) results in significant strength gains and muscle hypertrophy. However, few studies have examined the concurrent vascular responses following restrictive venous blood flow training protocols.\n\n\nPURPOSE\nThe purpose of this study was to examine the effects of 4 wk of handgrip exercise training, with and without venous restriction, on handgrip strength and brachial artery flow-mediated dilation (BAFMD).\n\n\nMETHODS\nTwelve participants (mean +/- SD: age = 22 +/- 1 yr, men = 5, women = 7) completed 4 wk of bilateral handgrip exercise training (duration = 20 min, intensity = 60% of the maximum voluntary contraction, cadence = 15 grips per minute, frequency = three sessions per week). During each session, venous blood flow was restricted in one arm (experimental (EXP) arm) using a pneumatic cuff placed 4 cm proximal to the antecubital fossa and inflated to 80 mm Hg for the duration of each exercise session. The EXP and the control (CON) arms were randomly selected. Handgrip strength was measured using a hydraulic hand dynamometer. Brachial diameters and blood velocity profiles were assessed, using Doppler ultrasonography, before and after 5 min of forearm occlusion (200 mm Hg) before and at the end of the 4-wk exercise.\n\n\nRESULTS\nAfter exercise training, handgrip strength increased 8.32% (P = 0.05) in the CON arm and 16.17% (P = 0.05) in the EXP arm. BAFMD increased 24.19% (P = 0.0001) in the CON arm and decreased 30.36% (P = 0.0001) in the EXP arm.\n\n\nCONCLUSIONS\nThe data indicate handgrip training combined with venous restriction results in superior strength gains but reduced BAFMD compared with the nonrestricted arm.",
"title": ""
},
{
"docid": "984f7a2023a14efbbd5027abfc12a586",
"text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.",
"title": ""
},
{
"docid": "eff903cb53fc7f7e9719a2372d517ab3",
"text": "The freshwater angelfishes (Pterophyllum) are South American cichlids that have become very popular among aquarists, yet scarce information on their culture and aquarium husbandry exists. We studied Pterophyllum scalare to analyze dietary effects on fecundity, growth, and survival of eggs and larvae during 135 days. Three diets were used: A) decapsulated cysts of Artemia, B) commercial dry fish food, and C) a mix diet of the rotifer Brachionus plicatilis and the cladoceran Daphnia magna. The initial larval density was 100 organisms in each 40 L aquarium. With diet A, larvae reached a maximum weight of 3.80 g, a total length of 6.3 cm, and a height of 5.8 cm; with diet B: 2.80 g, 4.81 cm, and 4.79 cm, and with diet C: 3.00 g, 5.15 cm, and 5.10 cm, respectively. Significant differences were observed between diet A, and diet B and C, but no significantly differences were observed between diets B and C. Fecundity varied from 234 to 1,082 eggs in 20 and 50 g females, respectively. Egg survival ranged from 87.4% up to 100%, and larvae survival (80 larvae/40 L aquarium) from 50% to 66.3% using diet B and A, respectively. Live food was better for growing fish than the commercial balanced food diet. Fecundity and survival are important factors in planning a good production of angelfish.",
"title": ""
},
{
"docid": "2d73a7ab1e5a784d4755ed2fe44078db",
"text": "Over the last years, many papers have been published about how to use machine learning for classifying postings on microblogging platforms like Twitter, e.g., in order to assist users to reach tweets that interest them. Typically, the automatic classification results are then evaluated against a gold standard classification which consists of either (i) the hashtags of the tweets' authors, or (ii) manual annotations of independent human annotators. In this paper, we show that there are fundamental differences between these two kinds of gold standard classifications, i.e., human annotators are more likely to classify tweets like other human annotators than like the tweets' authors. Furthermore, we discuss how these differences may influence the evaluation of automatic classifications, like they may be achieved by Latent Dirichlet Allocation (LDA). We argue that researchers who conduct machine learning experiments for tweet classification should pay particular attention to the kind of gold standard they use. One may even argue that hashtags are not appropriate as a gold standard for tweet classification.",
"title": ""
},
{
"docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "f48639ad675b863a28bb1bc773664ab0",
"text": "The definition and phenomenological features of 'burnout' and its eventual relationship with depression and other clinical conditions are reviewed. Work is an indispensable way to make a decent and meaningful way of living, but can also be a source of stress for a variety of reasons. Feelings of inadequate control over one's work, frustrated hopes and expectations and the feeling of losing of life's meaning, seem to be independent causes of burnout, a term that describes a condition of professional exhaustion. It is not synonymous with 'job stress', 'fatigue', 'alienation' or 'depression'. Burnout is more common than generally believed and may affect every aspect of the individual's functioning, have a deleterious effect on interpersonal and family relationships and lead to a negative attitude towards life in general. Empirical research suggests that burnout and depression are separate entities, although they may share several 'qualitative' characteristics, especially in the more severe forms of burnout, and in vulnerable individuals, low levels of satisfaction derived from their everyday work. These final issues need further clarification and should be the focus of future clinical research.",
"title": ""
},
{
"docid": "92e7a7603ec6e10d5066634955386d9b",
"text": "Obfuscation-based private web search (OB-PWS) solutions allow users to search for information in the Internet while concealing their interests. The basic privacy mechanism in OB-PWS is the automatic generation of dummy queries that are sent to the search engine along with users' real requests. These dummy queries prevent the accurate inference of search profiles and provide query deniability. In this paper we propose an abstract model and an associated analysis framework to systematically evaluate the privacy protection offered by OB-PWS systems. We analyze six existing OB-PWS solutions using our framework and uncover vulnerabilities in their designs. Based on these results, we elicit a set of features that must be taken into account when analyzing the security of OB-PWS designs to avoid falling into the same pitfalls as previous proposals.",
"title": ""
},
{
"docid": "945c5c7cd9eb2046c1b164e64318e52f",
"text": "This thesis explores the design and application of artificial immune systems (AISs), problem-solving systems inspired by the human and other immune systems. AISs to date have largely been modelled on the biological adaptive immune system and have taken little inspiration from the innate immune system. The first part of this thesis examines the biological innate immune system, which controls the adaptive immune system. The importance of the innate immune system suggests that AISs should also incorporate models of the innate immune system as well as the adaptive immune system. This thesis presents and discusses a number of design principles for AISs which are modelled on both innate and adaptive immunity. These novel design principles provided a structured framework for developing AISs which incorporate innate and adaptive immune systems in general. These design principles are used to build a software system which allows such AISs to be implemented and explored. AISs, as well as being inspired by the biological immune system, are also built to solve problems. In this thesis, using the software system and design principles we have developed, we implement several novel AISs and apply them to the problem of detecting attacks on computer systems. These AISs monitor programs running on a computer and detect whether the program is behaving abnormally or being attacked. The development of these AISs shows in more detail how AISs built on the design principles can be instantiated. In particular, we show how the use of AISs which incorporate both innate and adaptive immune system mechanisms can be used to reduce the number of false alerts and improve the performance of current approaches.",
"title": ""
},
{
"docid": "b9261a0d56a6305602ff27da5ec160e8",
"text": "In psychology the Rubber Hand Illusion (RHI) is an experiment where participants get the feeling that a fake hand is becoming their own. Recently, new testing methods using an action based paradigm have induced stronger RHI. However, these experiments are facing limitations because they are difficult to implement and lack of rigorous experimental conditions. This paper proposes a low-cost open source robotic hand which is easy to manufacture and removes these limitations. This device reproduces fingers movement of the participants in real time. A glove containing sensors is worn by the participant and records fingers flexion. Then a microcontroller drives hobby servo-motors on the robotic hand to reproduce the corresponding fingers position. A connection between the robotic device and a computer can be established, enabling the experimenters to tune precisely the desired parameters using Matlab. Since this is the first time a robotic hand is developed for the RHI, a validation study has been conducted. This study confirms previous results found in the literature. This study also illustrates the fact that the robotic hand can be used to conduct innovative experiments in the RHI field. Understanding such RHI is important because it can provide guidelines for prosthetic design.",
"title": ""
},
{
"docid": "eed5c66d0302c492f2480a888678d1dc",
"text": "In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using the Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes , trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.",
"title": ""
},
{
"docid": "b91204ac8a118fcde9a774e925f24a7e",
"text": "Document clustering has been recognized as a central problem in text data management. Such a problem becomes particularly challenging when document contents are characterized by subtopical discussions that are not necessarily relevant to each other. Existing methods for document clustering have traditionally assumed that a document is an indivisible unit for text representation and similarity computation, which may not be appropriate to handle documents with multiple topics. In this paper, we address the problem of multi-topic document clustering by leveraging the natural composition of documents in text segments that are coherent with respect to the underlying subtopics. We propose a novel document clustering framework that is designed to induce a document organization from the identification of cohesive groups of segment-based portions of the original documents. We empirically give evidence of the significance of our segment-based approach on large collections of multi-topic documents, and we compare it to conventional methods for document clustering.",
"title": ""
},
{
"docid": "c7a13f85fdeb234c09237581b7a83238",
"text": "Acoustic structures of sound in Gunnison's prairie dog alarm calls are described, showing how these acoustic structures may encode information about three different predator species (red-tailed hawk-Buteo jamaicensis; domestic dog-Canis familaris; and coyote-Canis latrans). By dividing each alarm call into 25 equal-sized partitions and using resonant frequencies within each partition, commonly occurring acoustic structures were identified as components of alarm calls for the three predators. Although most of the acoustic structures appeared in alarm calls elicited by all three predator species, the frequency of occurrence of these acoustic structures varied among the alarm calls for the different predators, suggesting that these structures encode identifying information for each of the predators. A classification analysis of alarm calls elicited by each of the three predators showed that acoustic structures could correctly classify 67% of the calls elicited by domestic dogs, 73% of the calls elicited by coyotes, and 99% of the calls elicited by red-tailed hawks. The different distributions of acoustic structures associated with alarm calls for the three predator species suggest a duality of function, one of the design elements of language listed by Hockett [in Animal Sounds and Communication, edited by W. E. Lanyon and W. N. Tavolga (American Institute of Biological Sciences, Washington, DC, 1960), pp. 392-430].",
"title": ""
},
{
"docid": "cbf10563c5eb251f765b93be554b7439",
"text": "BACKGROUND\nAlthough fine-needle aspiration (FNA) is a safe and accurate diagnostic procedure for assessing thyroid nodules, it has limitations in diagnosing follicular neoplasms due to its relatively high false-positive rate. The purpose of the present study was to evaluate the diagnostic role of core-needle biopsy (CNB) for thyroid nodules with follicular neoplasm (FN) in comparison with FNA.\n\n\nMETHODS\nA series of 107 patients (24 men, 83 women; mean age, 47.4 years) from 231 FNAs and 107 patients (29 men, 78 women; mean age, 46.3 years) from 186 CNBs with FN readings, all of whom underwent surgery, from October 2008 to December 2013 were retrospectively analyzed. The false-positive rate, unnecessary surgery rate, and malignancy rate for the FNA and CNB patients according to the final diagnosis following surgery were evaluated.\n\n\nRESULTS\nThe CNB showed a significantly lower false-positive and unnecessary surgery rate than the FNA (4.7% versus 30.8%, 3.7% versus 26.2%, p < 0.001, respectively). In the FNA group, 33 patients (30.8%) had non-neoplasms, including nodular hyperplasia (n = 32) and chronic lymphocytic thyroiditis (n = 1). In the CNB group, 5 patients (4.7%) had non-neoplasms, all of which were nodular hyperplasia. Moreover, the CNB group showed a significantly higher malignancy rate than FNA (57.9% versus 28%, p < 0.001).\n\n\nCONCLUSIONS\nCNB showed a significantly lower false-positive rate and a higher malignancy rate than FNA in diagnosing FN. Therefore, CNB could minimize unnecessary surgery and provide diagnostic confidence when managing patients with FN to perform surgery.",
"title": ""
},
{
"docid": "c09f3698f350ef749d3ef3e626c86788",
"text": "The te rm \"reactive system\" was introduced by David Harel and Amir Pnueli [HP85], and is now commonly accepted to designate permanent ly operating systems, and to distinguish them from \"trans]ormational systems\" i.e, usual programs whose role is to terminate with a result, computed from an initial da ta (e.g., a compiler). In synchronous programming, we understand it in a more restrictive way, distinguishing between \"interactive\" and \"reactive\" systems: Interactive systems permanent ly communicate with their environment, but at their own speed. They are able to synchronize with their environment, i.e., making it wait. Concurrent processes considered in operat ing systems or in data-base management , are generally interactive. Reactive systems, in our meaning, have to react to an environment which cannot wait. Typical examples appear when the environment is a physical process. The specific features of reactive systems have been pointed out many times [Ha193,BCG88,Ber89]:",
"title": ""
},
{
"docid": "7e720290d507c3370fc50782df3e90c4",
"text": "Photobacterium damselae subsp. piscicida is the causative agent of pasteurellosis in wild and farmed marine fish worldwide. Although serologically homogeneous, recent molecular advances have led to the discovery of distinct genetic clades, depending on geographical origin. Further details of the strategies for host colonisation have arisen including information on the role of capsule, susceptibility to oxidative stress, confirmation of intracellular survival in host epithelial cells, and induced apoptosis of host macrophages. This improved understanding has given rise to new ideas and advances in vaccine technologies, which are reviewed in this paper.",
"title": ""
}
] | scidocsrr |
0ac611db7f902244fabd8b175abad757 | Deep Learning Strong Parts for Pedestrian Detection | [
{
"docid": "ca20d27b1e6bfd1f827f967473d8bbdd",
"text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.",
"title": ""
},
{
"docid": "6c7156d5613e1478daeb08eecb17c1e2",
"text": "The idea behind the experiments in section 4.1 of the main paper is to demonstrate that, within a single framework, varying the features can replicate the jump in detection performance over a ten-year span (2004 2014), i.e. the jump in performance between VJ and the current state-of-the-art. See figure 1 for results on INRIA and Caltech-USA of the following methods (all based on SquaresChnFtrs, described in section 4 of the paper):",
"title": ""
}
] | [
{
"docid": "9c61ac11d2804323ba44ed91d05a0e46",
"text": "Nostalgia fulfills pivotal functions for individuals, but lacks an empirically derived and comprehensive definition. We examined lay conceptions of nostalgia using a prototype approach. In Study 1, participants generated open-ended features of nostalgia, which were coded into categories. In Study 2, participants rated the centrality of these categories, which were subsequently classified as central (e.g., memories, relationships, happiness) or peripheral (e.g., daydreaming, regret, loneliness). Central (as compared with peripheral) features were more often recalled and falsely recognized (Study 3), were classified more quickly (Study 4), were judged to reflect more nostalgia in a vignette (Study 5), better characterized participants' own nostalgic (vs. ordinary) experiences (Study 6), and prompted higher levels of actual nostalgia and its intrapersonal benefits when used to trigger a personal memory, regardless of age (Study 7). These findings highlight that lay people view nostalgia as a self-relevant and social blended emotional and cognitive state, featuring a mixture of happiness and loss. The findings also aid understanding of nostalgia's functions and identify new methods for future research.",
"title": ""
},
{
"docid": "381ce2a247bfef93c67a3c3937a29b5a",
"text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.",
"title": ""
},
{
"docid": "69f853b90b837211e24155a2f55b9a95",
"text": "We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2 , for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-ofthe-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2.",
"title": ""
},
{
"docid": "6ff681e22778abaf3b79f054fa5a1f30",
"text": "Computer generated battleeeld agents need to be able to explain the rationales for their actions. Such explanations make it easier to validate agent behavior, and can enhance the eeectiveness of the agents as training devices. This paper describes an explanation capability called Debrief that enables agents implemented in Soar to describe and justify their decisions. Debrief determines the motivation for decisions by recalling the context in which decisions were made, and determining what factors were critical to those decisions. In the process Debrief learns to recognize similar situations where the same decision would be made for the same reasons. Debrief currently being used by the TacAir-Soar tactical air agent to explain its actions , and is being evaluated for incorporation into other reactive planning agents.",
"title": ""
},
{
"docid": "64a98c3bc9aebfc470ad689b66b6d86b",
"text": "In his famous thought experiments on synthetic vehicles, Valentino Braitenberg stipulated that simple stimulus-response reactions in an organism could evoke the appearance of complex behavior, which, to the unsuspecting human observer, may even appear to be driven by emotions such as fear, aggression, and even love (Braitenberg, Vehikel. Experimente mit künstlichen Wesen, Lit Verlag, 2004). In fact, humans appear to have a strong propensity to anthropomorphize, driven by our inherent desire for predictability that will quickly lead us to discern patterns, cause-and-effect relationships, and yes, emotions, in animated entities, be they natural or artificial. But might there be reasons, that we should intentionally “implement” emotions into artificial entities, such as robots? How would we proceed in creating robot emotions? And what, if any, are the ethical implications of creating “emotional” robots? The following article aims to shed some light on these questions with a multi-disciplinary review of recent empirical investigations into the various facets of emotions in robot psychology.",
"title": ""
},
{
"docid": "78d33d767f9eb15ef79a6d016ffcfb3a",
"text": "Healthcare scientific applications, such as body area network, require of deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purpose. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such kind of real time streaming data applications. The current cloud platform either lacks of such a module to process streaming data, or scales in regard to coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistent running loop daemon. The beauty of this new framework is the scaling capability being designed at the Map and Task level, rather than being scaled from the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leading to effective use of compute resources in cloud data center. As a first step towards implementing this framework in real cloud, we developed a simulator that captures workload strength, and provisions the amount of Map and Reduce tasks just in need and in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filter, to estimate the unknown workload characteristics. We see 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use real streaming data workload trace to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently, as the streaming data changes its arrival rate. © 2014 Elsevier B.V. All rights reserved. ∗ Corresponding author at: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Tel.: +1",
"title": ""
},
{
"docid": "268087f94c1d5183fe8bdf6360280fab",
"text": "Big Data is a new term used to identify datasets that we cannot manage with current methodologies or data mining software tools due to their large size and complexity. Big Data mining is the capability of extracting useful information from these large datasets or streams of data. New mining techniques are necessary due to the volume, variability, and velocity, of such data. MOA is a software framework with classification, regression, and frequent pattern methods, and the new APACHE SAMOA is a distributed streaming software for mining data streams.",
"title": ""
},
{
"docid": "c72a2e504934580f9542a62b7037cdd4",
"text": "Software defect prediction is one of the most active research areas in software engineering. We can build a prediction model with defect data collected from a software project and predict defects in the same project, i.e. within-project defect prediction (WPDP). Researchers also proposed cross-project defect prediction (CPDP) to predict defects for new projects lacking in defect data by using prediction models built by other projects. In recent studies, CPDP is proved to be feasible. However, CPDP requires projects that have the same metric set, meaning the metric sets should be identical between projects. As a result, current techniques for CPDP are difficult to apply across projects with heterogeneous metric sets. To address the limitation, we propose heterogeneous defect prediction (HDP) to predict defects across projects with heterogeneous metric sets. Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to WPDP with statistical significance.",
"title": ""
},
{
"docid": "90aeccd6d6f94c668ed6cf5d3cc11298",
"text": "We develop a computational model for binocular stereopsis, attempting to explain the process by which the information detailing the 3-D geometry of object surfaces is encoded in a pair of stereo images. We design our model within a Bayesian framework, making explicit all of our assumptions about the nature of image coding and the structure of the world. We start by deriving our model for image formation, introducing a definition of half-occluded regions and deriving simple equations relating these regions to the disparity function. We show that the disparity function alone contains enough information to determine the half-occluded regions. We use these relations to derive a model for image formation in which the half-occluded regions are explicitly represented and computed. Next, we present our prior model in a series of three stages, or “worlds,” where each world considers an additional complication to the prior. We eventually argue that the prior model must be constructed from all of the local quantities in the scene geometry-i.e., depth, surface orientation, object boundaries, and surface creases. In addition, we present a new dynamic programming strategy for estimating these quantities. Throughout the article, we provide motivation for the development of our model by psychophysical examinations of the human visual system.",
"title": ""
},
{
"docid": "8dbe7ed9d801c7c39d583de6ebef9908",
"text": "We propose a novel approach for content based color image classification using Support Vector Machine (SVM). Traditional classification approaches deal poorly on content based image classification tasks being one of the reasons of high dimensionality of the feature space. In this paper, color image classification is done on features extracted from histograms of color components. The benefit of using color image histograms are better efficiency, and insensitivity to small changes in camera view-point i.e. translation and rotation. As a case study for validation purpose, experimental trials were done on a database of about 500 images divided into four different classes has been reported and compared on histogram features for RGB, CMYK, Lab, YUV, YCBCR, HSV, HVC and YIQ color spaces. Results based on the proposed approach are found encouraging in terms of color image classification accuracy.",
"title": ""
},
{
"docid": "6a68cf6f5503c5253b6035a11888ca15",
"text": "A method is developed that processes Global Navigation Satellite System (GNSS) beat carrier phase measurements from a single moving antenna in order to determine whether the GNSS signals are being spoofed. This technique allows a specially equipped GNSS receiver to detect sophisticated spoofing that cannot be detected using receiver autonomous integrity monitoring techniques. It works for both encrypted military signals and for unencrypted civilian signals. It does not require changes to the signal structure of unencrypted civilian GNSS signals. The method uses a short segment of beat carrier-phase time histories that are collected while the receiver's single antenna is undergoing a known, highfrequency motion profile, typically one pre-programmed into an antenna articulation system. The antenna also can be moving in an unknown way at lower frequencies, as might be the case if it were mounted on a ground vehicle, a ship, an airplane, or a spacecraft. The spoofing detection algorithm correlates high-pass-filtered versions of the known motion component with high-pass-filtered versions of the carrier phase variations. True signals produce a specific correlation pattern, and spoofed signals produce a recognizably different correlation pattern if the spoofer transmits its false signals from a single antenna. The most pronounced difference is that non-spoofed signals display variations between the beat carrier phase responses of multiple signals, but all signals' responses are identical in the spoofed case. These differing correlation characteristics are used to develop a hypothesis test in order to detect a spoofing attack or the lack thereof. For moving-base receivers, there is no need for prior knowledge of the vehicle's attitude. Instead, the detection calculations also provide a rough attitude measurement. Several versions of this spoofing detection system have been designed and tested. Some have been tested only with truth-model data, but one has been tested with actual live-signal data from the Global Positioning System (GPS) C/A code on the L1 frequency. The livedata tests correctly identified spoofing attacks in the 4 cases out of 8 trials that had actual attacks. These detections used worst-case false-alarm probabilities of 10 , and their worst-case probabilities of missed detection were no greater than 1.6x10. The ranges of antenna motion used to detect spoofing in these trials were between 4 and 6 cm, i.e., on the order of a quarter-cycle of the GPS L1 carrier wavelength.",
"title": ""
},
{
"docid": "2df35b05a40a646ba6f826503955601a",
"text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.",
"title": ""
},
{
"docid": "b191b9829aac1c1e74022c33e2488bbd",
"text": "We investigated the normal and parallel ground reaction forces during downhill and uphill running. Our rationale was that these force data would aid in the understanding of hill running injuries and energetics. Based on a simple spring-mass model, we hypothesized that the normal force peaks, both impact and active, would increase during downhill running and decrease during uphill running. We anticipated that the parallel braking force peaks would increase during downhill running and the parallel propulsive force peaks would increase during uphill running. But, we could not predict the magnitude of these changes. Five male and five female subjects ran at 3m/s on a force treadmill mounted on the level and on 3 degrees, 6 degrees, and 9 degrees wedges. During downhill running, normal impact force peaks and parallel braking force peaks were larger compared to the level. At -9 degrees, the normal impact force peaks increased by 54%, and the parallel braking force peaks increased by 73%. During uphill running, normal impact force peaks were smaller and parallel propulsive force peaks were larger compared to the level. At +9 degrees, normal impact force peaks were absent, and parallel propulsive peaks increased by 75%. Neither downhill nor uphill running affected normal active force peaks. Combined with previous biomechanics studies, our normal impact force data suggest that downhill running substantially increases the probability of overuse running injury. Our parallel force data provide insight into past energetic studies, which show that the metabolic cost increases during downhill running at steep angles.",
"title": ""
},
{
"docid": "477769b83e70f1d46062518b1d692664",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
{
"docid": "cb1952a4931955856c6479d7054c57e7",
"text": "This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes thatrequire client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating races conditions. On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs.",
"title": ""
},
{
"docid": "d59e64c1865193db3aaecc202f688690",
"text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.",
"title": ""
},
{
"docid": "6932912b1b880014b8eb2d1b796d7a91",
"text": "The ability to identify authors of computer programs based on their coding style is a direct threat to the privacy and anonymity of programmers. While recent work found that source code can be attributed to authors with high accuracy, attribution of executable binaries appears to be much more difficult. Many distinguishing features present in source code, e.g. variable names, are removed in the compilation process, and compiler optimization may alter the structure of a program, further obscuring features that are known to be useful in determining authorship. We examine programmer de-anonymization from the standpoint of machine learning, using a novel set of features that include ones obtained by decompiling the executable binary to source code. We adapt a powerful set of techniques from the domain of source code authorship attribution along with stylistic representations embedded in assembly, resulting in successful deanonymization of a large set of programmers. We evaluate our approach on data from the Google Code Jam, obtaining attribution accuracy of up to 96% with 100 and 83% with 600 candidate programmers. We present an executable binary authorship attribution approach, for the first time, that is robust to basic obfuscations, a range of compiler optimization settings, and binaries that have been stripped of their symbol tables. We perform programmer de-anonymization using both obfuscated binaries, and real-world code found “in the wild” in single-author GitHub repositories and the recently leaked Nulled.IO hacker forum. We show that programmers who would like to remain anonymous need to take extreme countermeasures to protect their privacy.",
"title": ""
},
{
"docid": "3415fb5e9b994d6015a17327fc0fe4f4",
"text": "A human stress monitoring patch integrates three sensors of skin temperature, skin conductance, and pulsewave in the size of stamp (25 mm × 15 mm × 72 μm) in order to enhance wearing comfort with small skin contact area and high flexibility. The skin contact area is minimized through the invention of an integrated multi-layer structure and the associated microfabrication process; thus being reduced to 1/125 of that of the conventional single-layer multiple sensors. The patch flexibility is increased mainly by the development of flexible pulsewave sensor, made of a flexible piezoelectric membrane supported by a perforated polyimide membrane. In the human physiological range, the fabricated stress patch measures skin temperature with the sensitivity of 0.31 Ω/°C, skin conductance with the sensitivity of 0.28 μV/0.02 μS, and pulse wave with the response time of 70 msec. The skin-attachable stress patch, capable to detect multimodal bio-signals, shows potential for application to wearable emotion monitoring.",
"title": ""
},
{
"docid": "1389e232bef9499c301fa4f4bbcb3e56",
"text": "PURPOSE\nTo review studies of healing touch and its implications for practice and research.\n\n\nDESIGN\nA review of the literature from published works, abstracts from conference proceedings, theses, and dissertations was conducted to synthesize information on healing touch. Works available until June 2003 were referenced.\n\n\nMETHODS\nThe studies were categorized by target of interventions and outcomes were evaluated.\n\n\nFINDINGS AND CONCLUSIONS\nOver 30 studies have been conducted with healing touch as the independent variable. Although no generalizable results were found, a foundation exists for further research to test its benefits.",
"title": ""
}
] | scidocsrr |
0055f77f1266c96c41d00c41c17015df | Query Rewriting for Horn-SHIQ Plus Rules | [
{
"docid": "205a5a9a61b6ac992f01c8c2fc09678a",
"text": "We present the OWL API, a high level Application Programming Interface (API) for working with OWL ontologies. The OWL API is closely aligned with the OWL 2 structural specification. It supports parsing and rendering in the syntaxes defined in the W3C specification (Functional Syntax, RDF/XML, OWL/XML and the Manchester OWL Syntax); manipulation of ontological structures; and the use of reasoning engines. The reference implementation of the OWL API, written in Java, includes validators for the various OWL 2 profiles OWL 2 QL, OWL 2 EL and OWL 2 RL. The OWL API has widespread usage in a variety of tools and applications.",
"title": ""
},
{
"docid": "de53086ad6d2f3a2c69aa37dde35bee7",
"text": "Towards the integration of rules and ontologies in the Semantic Web, we propose a combination of logic programming under the answer set semantics with the description logics SHIF(D) and SHOIN (D), which underly the Web ontology languages OWL Lite and OWL DL, respectively. This combination allows for building rules on top of ontologies but also, to a limited extent, building ontologies on top of rules. We introduce description logic programs (dl-programs), which consist of a description logic knowledge base L and a finite set of description logic rules (dl-rules) P . Such rules are similar to usual rules in logic programs with negation as failure, but may also contain queries to L, possibly default-negated, in their bodies. We define Herbrand models for dl-programs, and show that satisfiable positive dl-programs have a unique least Herbrand model. More generally, consistent stratified dl-programs can be associated with a unique minimal Herbrand model that is characterized through iterative least Herbrand models. We then generalize the (unique) minimal Herbrand model semantics for positive and stratified dl-programs to a strong answer set semantics for all dl-programs, which is based on a reduction to the least model semantics of positive dl-programs. We also define a weak answer set semantics based on a reduction to the answer sets of ordinary logic programs. Strong answer sets are weak answer sets, and both properly generalize answer sets of ordinary normal logic programs. We then give fixpoint characterizations for the (unique) minimal Herbrand model semantics of positive and stratified dl-programs, and show how to compute these models by finite fixpoint iterations. Furthermore, we give a precise picture of the complexity of deciding strong and weak answer set existence for a dl-program. 1Institut für Informationssysteme, Technische Universität Wien, Favoritenstraße 9-11, A-1040 Vienna, Austria; e-mail: {eiter, lukasiewicz, roman, tompits}@kr.tuwien.ac.at. 2Dipartimento di Informatica e Sistemistica, Università di Roma “La Sapienza”, Via Salaria 113, I-00198 Rome, Italy; e-mail: [email protected]. Acknowledgements: This work has been partially supported by the Austrian Science Fund project Z29N04 and a Marie Curie Individual Fellowship of the European Community programme “Human Potential” under contract number HPMF-CT-2001-001286 (disclaimer: The authors are solely responsible for information communicated and the European Commission is not responsible for any views or results expressed). We would like to thank Ian Horrocks and Ulrike Sattler for providing valuable information on complexityrelated issues during the preparation of this paper. Copyright c © 2004 by the authors INFSYS RR 1843-03-13 I",
"title": ""
}
] | [
{
"docid": "b2deb2c8ca5d03a2bd4651846c5a6d7c",
"text": "With the increasing user demand for elastic provisioning of resources coupled with ubiquitous and on-demand access to data, cloud computing has been recognized as an emerging technology to meet such dynamic user demands. In addition, with the introduction and rising use of mobile devices, the Internet of Things (IoT) has recently received considerable attention since the IoT has brought physical devices and connected them to the Internet, enabling each device to share data with surrounding devices and virtualized technologies in real-time. Consequently, the exploding data usage requires a new, innovative computing platform that can provide robust real-time data analytics and resource provisioning to clients. As a result, fog computing has recently been introduced to provide computation, storage and networking services between the end-users and traditional cloud computing data centers. This paper proposes a policy-based management of resources in fog computing, expanding the current fog computing platform to support secure collaboration and interoperability between different user-requested resources in fog computing.",
"title": ""
},
{
"docid": "7b6cf139cae3e9dae8a2886ddabcfef0",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "4cf6a69833d7e553f0818aa72c99c938",
"text": "Work on the semantics of questions has argued that the relation between a question and its answer(s) can be cast in terms of logical entailment. In this paper, we demonstrate how computational systems designed to recognize textual entailment can be used to enhance the accuracy of current open-domain automatic question answering (Q/A) systems. In our experiments, we show that when textual entailment information is used to either filter or rank answers returned by a Q/A system, accuracy can be increased by as much as 20% overall.",
"title": ""
},
{
"docid": "1011879d0447a1e1ce2bd9b449daf15b",
"text": "Coreless substrates have been used in more and more advanced package designs for their benefits in electrical performance and reduction in thickness. However, coreless substrate causes severe package warpage due to the lack of a rigid and low CTE core. In this paper, both experimental measured warpage data and model simulation data are presented and illustrate that asymmetric designs in substrate thickness direction are capable of improving package warpage when compared to the traditional symmetric design. A few asymmetric design options are proposed, including Cu layer thickness asymmetric design, dielectric layer thickness asymmetric design and dielectric material property asymmetric design. These design options are then studied in depth by simulation to understand their mechanism and quantify their effectiveness for warpage improvement. From the results, it is found that the dielectric material property asymmetric design is the most effective option to improve package warpage, especially when using a lower CTE dielectric in the bottom layers of the substrate and a high CTE dielectric in top layers. Cu layer thickness asymmetric design is another effective way for warpage reduction. The bottom Cu layers should be thinner than the top Cu layers. It is also found that the dielectric layer thickness asymmetric design is only effective for high layer count substrate. It is not effective for low layer count substrate. In this approach, the bottom dielectric layers should be thicker than the top dielectric layers. Furthermore, the results show the asymmetric substrate designs are usually more effective for warpage improvement at high temperature than at room temperature. They are also more effective for a high layer count substrate than a low layer count substrate.",
"title": ""
},
{
"docid": "9dd75e407c25d46aa0eb303a948985b1",
"text": "Being a corner stone of the New testament and Christian religion, the evangelical narration about Jesus Christ crucifixion had been drawing attention of many millions people, both Christians and representatives of other religions and convictions, almost for two thousand years.If in the last centuries the crucifixion was considered mainly from theological and historical positions, the XX century was marked by surge of medical and biological researches devoted to investigation of thanatogenesis of the crucifixion. However the careful analysis of the suggested concepts of death at the crucifixion shows that not all of them are well-founded. Moreover, some authors sometimes do not consider available historic facts.Not only the analysis of the original Greek text of the Gospel is absent in the published works but authors ignore the Gospel itself at times.",
"title": ""
},
{
"docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb",
"text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.",
"title": ""
},
{
"docid": "a0b8475e0f50bc603d2280c4dcea8c0f",
"text": "We provide data on the extent to which computer-related audit procedures are used and whether two factors, control risk assessment and audit firm size, influence computer-related audit procedures use. We used a field-based questionnaire to collect data from 181 auditors representing Big 4, national, regional, and local firms. Results indicate that computer-related audit procedures are generally used when obtaining an understanding of the client system and business processes and testing computer controls. Furthermore, 42.9 percent of participants indicate that they relied on internal controls; however, this percentage increases significantly for auditors at Big 4 firms. Finally, our results raise questions for future research regarding computer-related audit procedure use.",
"title": ""
},
{
"docid": "aa1a97f8f6f9f1c2627f63e1ec13e8cf",
"text": "In this paper, we review recent emerging theoretical and technological advances of artificial intelligence (AI) in the big data settings. We conclude that integrating data-driven machine learning with human knowledge (common priors or implicit intuitions) can effectively lead to explainable, robust, and general AI, as follows: from shallow computation to deep neural reasoning; from merely data-driven model to data-driven with structured logic rules models; from task-oriented (domain-specific) intelligence (adherence to explicit instructions) to artificial general intelligence in a general context (the capability to learn from experience). Motivated by such endeavors, the next generation of AI, namely AI 2.0, is positioned to reinvent computing itself, to transform big data into structured knowledge, and to enable better decision-making for our society.",
"title": ""
},
{
"docid": "3205d04f2f5648397ee1524b682ad938",
"text": "Sequential models achieve state-of-the-art results in audio, visual and textual domains with respect to both estimating the data distribution and generating high-quality samples. Efficient sampling for this class of models has however remained an elusive problem. With a focus on text-to-speech synthesis, we describe a set of general techniques for reducing sampling time while maintaining high output quality. We first describe a single-layer recurrent neural network, the WaveRNN, with a dual softmax layer that matches the quality of the state-of-the-art WaveNet model. The compact form of the network makes it possible to generate 24 kHz 16-bit audio 4× faster than real time on a GPU. Second, we apply a weight pruning technique to reduce the number of weights in the WaveRNN. We find that, for a constant number of parameters, large sparse networks perform better than small dense networks and this relationship holds for sparsity levels beyond 96%. The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time. Finally, we propose a new generation scheme based on subscaling that folds a long sequence into a batch of shorter sequences and allows one to generate multiple samples at once. The Subscale WaveRNN produces 16 samples per step without loss of quality and offers an orthogonal method for increasing sampling efficiency.",
"title": ""
},
{
"docid": "8470245ef870eb5246d65fa3eb1e760a",
"text": "Educational spaces play an important role in enhancing learning productivity levels of society people as the most important places to human train. Considering the cost, time and energy spending on these spaces, trying to design efficient and optimized environment is a necessity. Achieving efficient environments requires changing environmental criteria so that they can have a positive impact on the activities and learning in users. Therefore, creating suitable conditions for promoting learning in users requires full utilization of the comprehensive knowledge of architecture and the design of the physical environment with respect to the environmental, social and aesthetic dimensions; Which will naturally increase the usefulness of people in space and make optimal use of the expenses spent on building schools and the time spent on education and training.The main aim of this study was to find physical variables affecting on increasing productivity in learning environments. This study is quantitative-qualitative and was done in two research methods: a) survey research methods (survey) b) correlation method. The samples were teachers and students in secondary schools’ in Zahedan city, the sample size was 310 people. Variables were extracted using the literature review and deep interviews with professors and experts. The questionnaire was obtained using variables and it is used to collect the views of teachers and students. Cronbach’s alpha coefficient was 0.89 which indicates that the information gathering tool is acceptable. The findings shows that there are four main physical factor as: 1. Physical comfort, 2. Space layouts, 3. Psychological factors and 4. Visual factors thet they are affecting positively on space productivity. Each of the environmental factors play an important role in improving the learning quality and increasing interest in attending learning environments; therefore, the desired environment improves the productivity of the educational spaces by improving the components of productivity.",
"title": ""
},
{
"docid": "ad004dd47449b977cd30f2454c5af77a",
"text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.",
"title": ""
},
{
"docid": "b89d42f836730a782a9b0f5df5bbd5bd",
"text": "This paper proposes a new usability evaluation checklist, UseLearn, and a related method for eLearning systems. UseLearn is a comprehensive checklist which incorporates both quality and usability evaluation perspectives in eLearning systems. Structural equation modeling is deployed to validate the UseLearn checklist quantitatively. The experimental results show that the UseLearn method supports the determination of usability problems by criticality metric analysis and the definition of relevant improvement strategies. The main advantage of the UseLearn method is the adaptive selection of the most influential usability problems, and thus significant reduction of the time and effort for usability evaluation can be achieved. At the sketching and/or design stage of eLearning systems, it will provide an effective guidance to usability analysts as to what problems should be focused on in order to improve the usability perception of the end-users. Relevance to industry: During the sketching or design stage of eLearning platforms, usability problems should be revealed and eradicated to create more usable and quality eLearning systems to satisfy the end-users. The UseLearn checklist along with its quantitative methodology proposed in this study would be helpful for usability experts to achieve this goal. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1768ecf6a2d8a42ea701d7f242edb472",
"text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.",
"title": ""
},
{
"docid": "8574612823cccbb5f8bcc80532dae74e",
"text": "The decentralized cryptocurrency Bitcoin has experienced great success but also encountered many challenges. One of the challenges has been the long confirmation time and low transaction throughput. Another challenge is the lack of incentives at certain steps of the protocol, raising concerns for transaction withholding, selfish mining, etc. To address these challenges, we propose Solidus, a decentralized cryptocurrency based on permissionless Byzantine consensus. A core technique in Solidus is to use proof of work for leader election to adapt the Practical Byzantine Fault Tolerance (PBFT) protocol to a permissionless setting. We also design Solidus to be incentive compatible and to mitigate selfish mining. Solidus improves on Bitcoin in confirmation time, and provides safety and liveness assuming Byzantine players and the largest coalition of rational players collectively control less than one-third of the computation power.",
"title": ""
},
{
"docid": "4f2112175c5d8175c5c0f8cb4d9185a2",
"text": "It is difficult to fully assess the quality of software inhouse, outside the actual time and context in which it will execute after deployment. As a result, it is common for software to manifest field failures, failures that occur on user machines due to untested behavior. Field failures are typically difficult to recreate and investigate on developer platforms, and existing techniques based on crash reporting provide only limited support for this task. In this paper, we present a technique for recording, reproducing, and minimizing failing executions that enables and supports inhouse debugging of field failures. We also present a tool that implements our technique and an empirical study that evaluates the technique on a widely used e-mail client.",
"title": ""
},
{
"docid": "aeb039a1e5ae76bf8e928e6b8cbfdf7f",
"text": "ZHENG, Traditional Chinese Medicine syndrome, is an integral and essential part of Traditional Chinese Medicine theory. It defines the theoretical abstraction of the symptom profiles of individual patients and thus, used as a guideline in disease classification in Chinese medicine. For example, patients suffering from gastritis may be classified as Cold or Hot ZHENG, whereas patients with different diseases may be classified under the same ZHENG. Tongue appearance is a valuable diagnostic tool for determining ZHENG in patients. In this paper, we explore new modalities for the clinical characterization of ZHENG using various supervised machine learning algorithms. We propose a novel-color-space-based feature set, which can be extracted from tongue images of clinical patients to build an automated ZHENG classification system. Given that Chinese medical practitioners usually observe the tongue color and coating to determine a ZHENG type and to diagnose different stomach disorders including gastritis, we propose using machine-learning techniques to establish the relationship between the tongue image features and ZHENG by learning through examples. The experimental results obtained over a set of 263 gastritis patients, most of whom suffering Cold Zheng or Hot ZHENG, and a control group of 48 healthy volunteers demonstrate an excellent performance of our proposed system.",
"title": ""
},
{
"docid": "9d7f80a70838f2ea9962de95e2b71827",
"text": "In this paper, one new machine family, i.e. named as flux-modulation machines which produce steady torque based on flux-modulation effect is proposed. The typical model including three components-one flux modulator, one armature and one excitation field exciters of flux-modulation machines is built. The torque relationships among the three components are developed based on the principle of electromechanical energy conversion. Then, some structure and performance features of flux-modulation machines are summarized, through which the flux-modulation topology distinguish criterion is proposed for the first time. Flux-modulation topologies can be further classified into stationary flux modulator, stationary excitation field, stationary armature field and dual-mechanical port flux-modulation machines. Many existed topologies, such as vernier, switched flux, flux reversal and transverse machines, are demonstrated that they can be classified into the flux-modulation family based on the criterion, and the processes how to convert typical models of flux-modulation machines to these machines are also given in this paper. Furthermore, in this new machine family, developed and developing theories on the vernier, switched flux, flux reversal and transverse machines can be shared with each other as well as some novel topologies in such a machine category. Based on the flux modulation principle, the nature and general theory, such as torque, power factor expressions and so on, of the flux-modulation machines are investigated. In additions, flux-modulation induction and electromagnetic transmission topologies are predicted and analyzed to enrich the flux-modulation electromagnetic topology family and the prospective applications are highlighted. Finally, one vernier permanent magnet prototype has been built and tested to verify the analysis results.",
"title": ""
},
{
"docid": "836eb904c483cd157807302997dd1aac",
"text": "Recent improvements in both the performance and scalability of shared-nothing, transactional, in-memory NewSQL databases have reopened the research question of whether distributed metadata for hierarchical file systems can be managed using commodity databases. In this paper, we introduce HopsFS, a next generation distribution of the Hadoop Distributed File System (HDFS) that replaces HDFS’ single node in-memory metadata service, with a distributed metadata service built on a NewSQL database. By removing the metadata bottleneck, HopsFS enables an order of magnitude larger and higher throughput clusters compared to HDFS. Metadata capacity has been increased to at least 37 times HDFS’ capacity, and in experiments based on a workload trace from Spotify, we show that HopsFS supports 16 to 37 times the throughput of Apache HDFS. HopsFS also has lower latency for many concurrent clients, and no downtime during failover. Finally, as metadata is now stored in a commodity database, it can be safely extended and easily exported to external systems for online analysis and free-text search.",
"title": ""
},
{
"docid": "a7135b1e6b9f5a506791915c2344c8b2",
"text": "There has been extensive research focusing on developing smart environments by integrating data mining techniques into environments that are equipped with sensors and actuators. The ultimate goal is to reduce the energy consumption in buildings while maintaining a maximum comfort level for occupants. However, there are few studies successfully demonstrating energy savings from occupancy behavioural patterns that have been learned in a smart environment because of a lack of a formal connection to building energy management systems. In this study, the objective is to develop and implement algorithms for sensor-based modelling and prediction of user behaviour in intelligent buildings and connect the behavioural patterns to building energy and comfort management systems through simulation tools. The results are tested on data from a room equipped with a distributed set of sensors, and building simulations through EnergyPlus suggest potential energy savings of 30% while maintaining an indoor comfort level when compared with other basic energy savings HVAC control strategies.",
"title": ""
},
{
"docid": "e7ac73f581ae7799021374ddd3e4d3a2",
"text": "Table: Coherence evaluation results on Discrimination and Insertion tasks. † indicates a neural model is significantly superior to its non-neural counterpart with p-value < 0.01. Discr. Ins. Acc F1 Random 50.00 50.00 12.60 Graph-based (G&S) 64.23 65.01 11.93 Dist. sentence (L&H) 77.54 77.54 19.32 Grid-all nouns (E&C) 81.58 81.60 22.13 Extended Grid (E&C) 84.95 84.95 23.28 Grid-CNN 85.57† 85.57† 23.12 Extended Grid-CNN 88.69† 88.69† 25.95†",
"title": ""
}
] | scidocsrr |
9e58339871e793e5dc575d904a500a42 | A Continuously Growing Dataset of Sentential Paraphrases | [
{
"docid": "3b0b6075cf6cdb13d592b54b85cdf4af",
"text": "We address the problem of sentence alignment for monolingual corpora, a phenomenon distinct from alignment in parallel corpora. Aligning large comparable corpora automatically would provide a valuable resource for learning of text-totext rewriting rules. We incorporate context into the search for an optimal alignment in two complementary ways: learning rules for matching paragraphs using topic structure and further refining the matching through local alignment to find good sentence pairs. Evaluation shows that our alignment method outperforms state-of-the-art systems developed for the same task.",
"title": ""
}
] | [
{
"docid": "17bd801e028d168795620b590bb8cfce",
"text": "Video shot boundary detection (SBD) is the first and essential step for content-based video management and structural analysis. Great efforts have been paid to develop SBD algorithms for years. However, the high computational cost in the SBD becomes a block for further applications such as video indexing, browsing, retrieval, and representation. Motivated by the requirement of the real-time interactive applications, a unified fast SBD scheme is proposed in this paper. We adopted a candidate segment selection and singular value decomposition (SVD) to speed up the SBD. Initially, the positions of the shot boundaries and lengths of gradual transitions are predicted using adaptive thresholds and most non-boundary frames are discarded at the same time. Only the candidate segments that may contain the shot boundaries are preserved for further detection. Then, for all frames in each candidate segment, their color histograms in the hue-saturation-value) space are extracted, forming a frame-feature matrix. The SVD is then performed on the frame-feature matrices of all candidate segments to reduce the feature dimension. The refined feature vector of each frame in the candidate segments is obtained as a new metric for boundary detection. Finally, cut and gradual transitions are identified using our pattern matching method based on a new similarity measurement. Experiments on TRECVID 2001 test data and other video materials show that the proposed scheme can achieve a high detection speed and excellent accuracy compared with recent SBD schemes.",
"title": ""
},
{
"docid": "315c7a7a37a69adb90f5059693c75d93",
"text": "This paper provides a comprehensive study of interleave-division multiple-access (IDMA) systems. The IDMA receiver principles for different modulation and channel conditions are outlined. A semi-analytical technique is developed based on the density evolution technique to estimate the bit-error-rate (BER) of the system. It provides a fast and relatively accurate method to predict the performance of the IDMA scheme. With simple convolutional/repetition codes, overall throughputs of 3 bits/chip with one receive antenna and 6 bits/chip with two receive antennas are observed for IDMA systems involving as many as about 100 users.",
"title": ""
},
{
"docid": "e60bba00c770afdb9d3a971dba1b1508",
"text": "Theories of insight problems are often tested by formulating hypotheses about the particular difficulties of individual insight problems. Such evaluations often implicitly assume that there is a single difficulty. We argue that the quantitatively small effects of many studies arise because the difficulty of many insight problems is determined by multiple factors, so the removal of 1 factor has limited effect on the solution rate. Difficulties can reside either in problem perception, in prior knowledge, or in the processing of the problem information. We support this multiple factors perspective through 3 experiments on the 9-dot problem (N.R.F. Maier, 1930). Our results lead to a significant reformulation of the classical hypothesis as to why this problem is difficult. The results have general implications for our understanding of insight problem solving and for the interpretation of data from studies that aim to evaluate hypotheses about the sources of difficulty of particular insight problems.",
"title": ""
},
{
"docid": "cdd3c529e1f934839444f054ecc93319",
"text": "Flow visualization has been a very attractive component of scientific visualization research for a long time. Usually very large multivariate datasets require processing. These datasets often consist of a large number of sample locations and several time steps. The steadily increasing performance of computers has recently become a driving factor for a reemergence in flow visualization research, especially in texture-based techniques. In this paper, dense, texture-based flow visualization techniques are discussed. This class of techniques attempts to provide a complete, dense representation of the flow field with high spatio-temporal coherency. An attempt of categorizing closely related solutions is incorporated and presented. Fundamentals are shortly addressed as well as advantages and disadvantages of the methods.",
"title": ""
},
{
"docid": "2248c955d3fd7d8119fde48560db1962",
"text": "Requirements engineering is concerned with the identification of high-level goals to be achieved by the system envisioned, the refinement of such goals, the operationalization of goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices and programs. Goal refinement and operationalization is a complex process which is not well supported by current requirements engineering technology. Ideally some form of formal support should be provided, but formal methods are difficult and costly to apply at this stage.This paper presents an approach to goal refinement and operationalization which is aimed at providing constructive formal support while hiding the underlying mathematics. The principle is to reuse generic refinement patterns from a library structured according to strengthening/weakening relationships among patterns. The patterns are once for all proved correct and complete. They can be used for guiding the refinement process or for pointing out missing elements in a refinement. The cost inherent to the use of a formal method is thus reduced significantly. Tactics are proposed to the requirements engineer for grounding pattern selection on semantic criteria.The approach is discussed in the context of the multi-paradigm language used in the KAOS method; this language has an external semantic net layer for capturing goals, constraints, agents, objects and actions together with their links, and an inner formal assertion layer that includes a real-time temporal logic for the specification of goals and constraints. Some frequent refinement patterns are high-lighted and illustrated through a variety of examples.The general principle is somewhat similar in spirit to the increasingly popular idea of design patterns, although it is grounded on a formal framework here.",
"title": ""
},
{
"docid": "910a3be33d479be4ed6e7e44a56bb8fb",
"text": "Support vector machine (SVM) is a supervised machine learning approach that was recognized as a statistical learning apotheosis for the small-sample database. SVM has shown its excellent learning and generalization ability and has been extensively employed in many areas. This paper presents a performance analysis of six types of SVMs for the diagnosis of the classical Wisconsin breast cancer problem from a statistical point of view. The classification performance of standard SVM (St-SVM) is analyzed and compared with those of the other modified classifiers such as proximal support vector machine (PSVM) classifiers, Lagrangian support vector machines (LSVM), finite Newton method for Lagrangian support vector machine (NSVM), Linear programming support vector machines (LPSVM), and smooth support vector machine (SSVM). The experimental results reveal that these SVM classifiers achieve very fast, simple, and efficient breast cancer diagnosis. The training results indicated that LSVM has the lowest accuracy of 95.6107 %, while St-SVM performed better than other methods for all performance indices (accuracy = 97.71 %) and is closely followed by LPSVM (accuracy = 97.3282). However, in the validation phase, the overall accuracies of LPSVM achieved 97.1429 %, which was superior to LSVM (95.4286 %), SSVM (96.5714 %), PSVM (96 %), NSVM (96.5714 %), and St-SVM (94.86 %). Value of ROC and MCC for LPSVM achieved 0.9938 and 0.9369, respectively, which outperformed other classifiers. The results strongly suggest that LPSVM can aid in the diagnosis of breast cancer.",
"title": ""
},
{
"docid": "f75a1e5c9268a3a64daa94bb9c7f522d",
"text": "Many natural language generation tasks, such as abstractive summarization and text simplification, are paraphrase-orientated. In these tasks, copying and rewriting are two main writing modes. Most previous sequence-to-sequence (Seq2Seq) models use a single decoder and neglect this fact. In this paper, we develop a novel Seq2Seq model to fuse a copying decoder and a restricted generative decoder. The copying decoder finds the position to be copied based on a typical attention model. The generative decoder produces words limited in the source-specific vocabulary. To combine the two decoders and determine the final output, we develop a predictor to predict the mode of copying or rewriting. This predictor can be guided by the actual writing mode in the training data. We conduct extensive experiments on two different paraphrase datasets. The result shows that our model outperforms the stateof-the-art approaches in terms of both informativeness and language quality.",
"title": ""
},
{
"docid": "bdb738a5df12bbd3862f0e5320856473",
"text": "The Extended Kalman Filter (EKF) has become a standard technique used in a number of nonlinear estimation and machine learning applications. These include estimating the state of a nonlinear dynamic system, estimating parameters for nonlinear system identification (e.g., learning the weights of a neural network), and dual estimation (e.g., the ExpectationMaximization (EM) algorithm)where both states and parameters are estimated simultaneously. This paper points out the flaws in using the EKF, and introduces an improvement, the Unscented Kalman Filter (UKF), proposed by Julier and Uhlman [5]. A central and vital operation performed in the Kalman Filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF, the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF, in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. Our preliminary results were presented in [13]. In this paper, the algorithms are further developed and illustrated with a number of additional examples. This work was sponsored by the NSF under grant grant IRI-9712346",
"title": ""
},
{
"docid": "899e3e436cdaed9efb66b7c9c296ea90",
"text": "Background estimation and foreground segmentation are important steps in many high-level vision tasks. Many existing methods estimate background as a low-rank component and foreground as a sparse matrix without incorporating the structural information. Therefore, these algorithms exhibit degraded performance in the presence of dynamic backgrounds, photometric variations, jitter, shadows, and large occlusions. We observe that these backgrounds often span multiple manifolds. Therefore, constraints that ensure continuity on those manifolds will result in better background estimation. Hence, we propose to incorporate the spatial and temporal sparse subspace clustering into the robust principal component analysis (RPCA) framework. To that end, we compute a spatial and temporal graph for a given sequence using motion-aware correlation coefficient. The information captured by both graphs is utilized by estimating the proximity matrices using both the normalized Euclidean and geodesic distances. The low-rank component must be able to efficiently partition the spatiotemporal graphs using these Laplacian matrices. Embedded with the RPCA objective function, these Laplacian matrices constrain the background model to be spatially and temporally consistent, both on linear and nonlinear manifolds. The solution of the proposed objective function is computed by using the linearized alternating direction method with adaptive penalty optimization scheme. Experiments are performed on challenging sequences from five publicly available datasets and are compared with the 23 existing state-of-the-art methods. The results demonstrate excellent performance of the proposed algorithm for both the background estimation and foreground segmentation.",
"title": ""
},
{
"docid": "853e1e7bc1585bf8cd87a3aeb3797f24",
"text": "Violent video game playing is correlated with aggression, but its relation to antisocial behavior in correctional and juvenile justice samples is largely unknown. Based on a data from a sample of institutionalized juvenile delinquents, behavioral and attitudinal measures relating to violent video game playing were associated with a composite measure of delinquency and a more specific measure of violent delinquency after controlling for the effects of screen time, years playing video games, age, sex, race, delinquency history, and psychopathic personality traits. Violent video games are associated with antisociality even in a clinical sample, and these effects withstand the robust influences of multiple correlates of juvenile delinquency and youth violence most notably psychopathy.",
"title": ""
},
{
"docid": "04e094e8f1e0466248df9c1263285f0b",
"text": "We propose a mathematical formulation for the notion of optimal projective cluster, starting from natural requirements on the density of points in subspaces. This allows us to develop a Monte Carlo algorithm for iteratively computing projective clusters. We prove that the computed clusters are good with high probability. We implemented a modified version of the algorithm, using heuristics to speed up computation. Our extensive experiments show that our method is significantly more accurate than previous approaches. In particular, we use our techniques to build a classifier for detecting rotated human faces in cluttered images.",
"title": ""
},
{
"docid": "0a143c2d4af3cc726964a90927556399",
"text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exists like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefiicients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.",
"title": ""
},
{
"docid": "ee4d5fae117d6af503ceb65707814c1b",
"text": "We investigate the use of syntactically related pairs of words for the task of text classification. The set of all pairs of syntactically related words should intuitively provide a better description of what a document is about, than the set of proximity-based N-grams or selective syntactic phrases. We generate syntactically related word pairs using a dependency parser. We experimented with Support Vector Machines and Decision Tree learners on the 10 most frequent classes from the Reuters-21578 corpus. Results show that syntactically related pairs of words produce better results in terms of accuracy and precision when used alone or combined with unigrams, compared to unigrams alone.",
"title": ""
},
{
"docid": "0678ff4994cfaf0f7b82da59172d453a",
"text": "INTRODUCTION\nPhysical training for United States military personnel requires a combination of injury prevention and performance optimization to counter unintentional musculoskeletal injuries and maximize warrior capabilities. Determining the most effective activities and tasks to meet these goals requires a systematic, research-based approach that is population specific based on the tasks and demands of the warrior.\n\n\nOBJECTIVE\nWe have modified the traditional approach to injury prevention to implement a comprehensive injury prevention and performance optimization research program with the 101st Airborne Division (Air Assault) at Ft. Campbell, KY. This is Part I of two papers that presents the research conducted during the first three steps of the program and includes Injury Surveillance, Task and Demand Analysis, and Predictors of Injury and Optimal Performance.\n\n\nMETHODS\nInjury surveillance based on a self-report of injuries was collected on all Soldiers participating in the study. Field-based analyses of the tasks and demands of Soldiers performing typical tasks of 101st Soldiers were performed to develop 101st-specific laboratory testing and to assist with the design of the intervention (Eagle Tactical Athlete Program (ETAP)). Laboratory testing of musculoskeletal, biomechanical, physiological, and nutritional characteristics was performed on Soldiers and benchmarked to triathletes to determine predictors of injury and optimal performance and to assist with the design of ETAP.\n\n\nRESULTS\nInjury surveillance demonstrated that Soldiers of the 101st are at risk for a wide range of preventable unintentional musculoskeletal injuries during physical training, tactical training, and recreational/sports activities. The field-based analyses provided quantitative data and qualitative information essential to guiding 101st specific laboratory testing and intervention design. Overall the laboratory testing revealed that Soldiers of the 101st would benefit from targeted physical training to meet the specific demands of their job and that sub-groups of Soldiers would benefit from targeted injury prevention activities.\n\n\nCONCLUSIONS\nThe first three steps of the injury prevention and performance research program revealed that Soldiers of the 101st suffer preventable musculoskeletal injuries, have unique physical demands, and would benefit from targeted training to improve performance and prevent injury.",
"title": ""
},
{
"docid": "c391b0cddadc4fb8dde78e453e501b57",
"text": "In this paper, we explore how privacy settings and privacy policy consumption (reading the privacy policy) affect the relationship between privacy attitudes and disclosure behaviors. We present results from a survey completed by 122 users of Facebook regarding their information disclosure practices and their attitudes about privacy. Based on our data, we develop and evaluate a model for understanding factors that affect how privacy attitudes influence disclosure and discuss implications for social network sites. Our analysis shows that the relationship between privacy attitudes and certain types of disclosures (those furthering contact) are controlled by privacy policy consumption and privacy behaviors. This provides evidence that social network sites could help mitigate concerns about disclosure by providing transparent privacy policies and privacy controls. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "11c7ceb4d63be002154cf162f635687c",
"text": "Inter-network interference is a significant source of difficulty for wireless body area networks. Movement, proximity and the lack of central coordination all contribute to this problem. We compare the interference power of multiple Body Area Network (BAN) devices when a group of people move randomly within an office area. We find that the path loss trend is dominated by local variations in the signal, and not free-space path loss exponent.",
"title": ""
},
{
"docid": "010fd9fcd9afb973a1930fbb861654c9",
"text": "We show that the Winternitz one-time signature scheme is existentially unforgeable under adaptive chosen message attacks when instantiated with a family of pseudorandom functions. Our result halves the signature size at the same security level, compared to previous results, which require a collision resistant hash function. We also consider security in the strong sense and show that the Winternitz one-time signature scheme is strongly unforgeable assuming additional properties of the pseudorandom function family. In this context we formally define several key-based security notions for function families and investigate their relation to pseudorandomness. All our reductions are exact and in the standard model and can directly be used to estimate the output length of the hash function required to meet a certain security level.",
"title": ""
},
{
"docid": "d5a816dd44d4d95b0d281880f1917831",
"text": "In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.",
"title": ""
},
{
"docid": "ff0d27f1ba24321dedfc01cee017a23a",
"text": "In Mexico, local empirical knowledge about medicinal properties of plants is the basis for their use as home remedies. It is generally accepted by many people in Mexico and elsewhere in the world that beneficial medicinal effects can be obtained by ingesting plant products. In this review, we focus on the potential pharmacologic bases for herbal plant efficacy, but we also raise concerns about the safety of these agents, which have not been fully assessed. Although numerous randomized clinical trials of herbal medicines have been published and systematic reviews and meta-analyses of these studies are available, generalizations about the efficacy and safety of herbal medicines are clearly not possible. Recent publications have also highlighted the unintended consequences of herbal product use, including morbidity and mortality. It has been found that many phytochemicals have pharmacokinetic or pharmacodynamic interactions with drugs. The present review is limited to some herbal medicines that are native or cultivated in Mexico and that have significant use. We discuss the cultural uses, phytochemistry, pharmacological, and toxicological properties of the following plant species: nopal (Opuntia ficus), peppermint (Mentha piperita), chaparral (Larrea divaricata), dandlion (Taraxacum officinale), mullein (Verbascum densiflorum), chamomile (Matricaria recutita), nettle or stinging nettle (Urtica dioica), passionflower (Passiflora incarnata), linden flower (Tilia europea), and aloe (Aloe vera). We conclude that our knowledge of the therapeutic benefits and risks of some herbal medicines used in Mexico is still limited and efforts to elucidate them should be intensified.",
"title": ""
}
] | scidocsrr |
b5a2a8306f9669a92d6e618327d63bf0 | Adversarial Distillation of Bayesian Neural Network Posteriors | [
{
"docid": "3bad6f7bf3680d33eca19f924fa9084a",
"text": "Deep Learning models are vulnerable to adversarial examples, i.e. images obtained via deliberate imperceptible perturbations, such that the model misclassifies them with high confidence. However, class confidence by itself is an incomplete picture of uncertainty. We therefore use principled Bayesian methods to capture model uncertainty in prediction for observing adversarial misclassification. We provide an extensive study with different Bayesian neural networks attacked in both white-box and black-box setups. The behaviour of the networks for noise, attacks and clean test data is compared. We observe that Bayesian neural networks are uncertain in their predictions for adversarial perturbations, a behaviour similar to the one observed for random Gaussian perturbations. Thus, we conclude that Bayesian neural networks can be considered for detecting adversarial examples.",
"title": ""
},
{
"docid": "d5c67b93732fbf1f572b9b35a58d425e",
"text": "We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.",
"title": ""
},
{
"docid": "b12bae586bc49a12cebf11cca49c0386",
"text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.",
"title": ""
},
{
"docid": "11a69c06f21e505b3e05384536108325",
"text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"title": ""
}
] | [
{
"docid": "69bb52e45db91f142b8c5297abd21282",
"text": "IP-based solutions to accommodate mobile hosts within existing internetworks do not address the distinctive features of wireless mobile computing. IP-based transport protocols thus suffer from poor performance when a mobile host communicates with a host on the fixed network. This is caused by frequent disruptions in network layer connectivity due to — i) mobility and ii) unreliable nature of the wireless link. We describe the design and implementation of I-TCP, which is an indirect transport layer protocol for mobile hosts. I-TCP utilizes the resources of Mobility Support Routers (MSRs) to provide transport layer communication between mobile hosts and hosts on the fixed network. With I-TCP, the problems related to mobility and the unreliability of wireless link are handled entirely within the wireless link; the TCP/IP software on the fixed hosts is not modified. Using I-TCP on our testbed, the throughput between a fixed host and a mobile host improved substantially in comparison to regular TCP.",
"title": ""
},
{
"docid": "e202a32d88a315419eba627ed336a881",
"text": "Innovation is defined as the development and implementation of new ^^eas by people who over time engage in transactions with others within an institutional order. Thxs defmUion focuses on four basic factors (new ideas, people, transactions, and ms itut.onal context)^An understanding of how these factors are related leads to four basic problems confronting most general managers: (1) a human problem of managing attention, (2) a process probleni in manlgng new ideas into good currency, (3) a structural problem of managing part-whole TelatLnships, and (4) a strategic problem of institutional leadership. This paper discusses thes four basic problems and concludes by suggesting how they fit together into an overall framework to guide longitudinal study of the management of innovation. (ORGANIZATIONAL EFFECTIVENESS; INNOVATION)",
"title": ""
},
{
"docid": "c91196dcb309b9c706a1de8b2a879d0f",
"text": "The goal of process design is the construction of a process model that is a priori optimal w.r.t. the goal(s) of the business owning the process. Process design is therefore a major factor in determining the process performance and ultimately the success of a business. Despite this importance, the designed process is often less than optimal. This is due to two major challenges: First, since the design is an a priori ability, no actual execution data is available to provide the foundations for design decisions. Second, since modeling decision support is typically basic at best, the quality of the design largely depends on the ability of business analysts to make the ”right” design choices. To address these challenges, we present in this paper our deep Business Optimization Platform that enables (semi-) automated process optimization during process design based on actual execution data. Our platform achieves this task by matching new processes to existing processes stored in a repository based on similarity metrics and by using a set of formalized best-practice process optimization patterns.",
"title": ""
},
{
"docid": "ec37a20ce084cf471838dc9e2fa55c9f",
"text": "Recently, deep learning has gained prominence due to the potential it portends for machine learning. For this reason, deep learning techniques have been applied in many fields, such as recognizing some kinds of patterns or classification. Intrusion detection analyses got data from monitoring security events to get situation assessment of network. Lots of traditional machine learning method has been put forward to intrusion detection, but it is necessary to improvement the detection performance and accuracy. This paper discusses different methods which were used to classify network traffic. We decided to use different methods on open data set and did experiment with these methods to find out a best way to intrusion detection.",
"title": ""
},
{
"docid": "e237320556387e6b83affc1ae091f154",
"text": "Considering the difficult technical and sociological issues affecting the regulation of artificial intelligence research and applications.",
"title": ""
},
{
"docid": "59dd112faf8b485e91f70b713d1eee29",
"text": "Background. Imperforate hymen is usually treated with hymenotomy, and the management after its spontaneous rupture is not very well known. Case. In this paper, we present spontaneous rupture of the imperforate hymen in a 13-year-old adolescent girl with hematocolpometra just before a planned hymenotomy operation. The patient was managed conservatively with a satisfactory outcome. Conclusion. Hymenotomy may not be needed in cases with spontaneous rupture of the imperforate hymen if adequate opening for menstrual discharge is warranted.",
"title": ""
},
{
"docid": "3f1a2efdff6be4df064f3f5b978febee",
"text": "D-galactose injection has been shown to induce many changes in mice that represent accelerated aging. This mouse model has been widely used for pharmacological studies of anti-aging agents. The underlying mechanism of D-galactose induced aging remains unclear, however, it appears to relate to glucose and 1ipid metabolic disorders. Currently, there has yet to be a study that focuses on investigating gene expression changes in D-galactose aging mice. In this study, integrated analysis of gas chromatography/mass spectrometry-based metabonomics and gene expression profiles was used to investigate the changes in transcriptional and metabolic profiles in mimetic aging mice injected with D-galactose. Our findings demonstrated that 48 mRNAs were differentially expressed between control and D-galactose mice, and 51 potential biomarkers were identified at the metabolic level. The effects of D-galactose on aging could be attributed to glucose and 1ipid metabolic disorders, oxidative damage, accumulation of advanced glycation end products (AGEs), reduction in abnormal substance elimination, cell apoptosis, and insulin resistance.",
"title": ""
},
{
"docid": "6d7188bd9d7a9a6c80c573d6184d467d",
"text": "Background: Feedback of the weak areas of knowledge in RPD using continuous competency or other test forms is very essential to develop the student knowledge and the syllabus as well. This act should be a regular practice. Aim: To use the outcome of competency test and the objectives structured clinical examination of removable partial denture as a reliable measure to provide a continuous feedback to the teaching system. Method: This sectional study was performed on sixty eight, fifth year students for the period from 2009 to 2010. The experiment was divided into two parts: continuous assessment and the final examination. In the first essay; some basic removable partial denture knowledge, surveying technique, and designing of the metal framework were used to estimate the learning outcome. While in the second essay, some components of the objectives structured clinical examination were compared to the competency test to see the difference in learning outcome. Results: The students’ performance was improved in the final assessment just in some aspects of removable partial denture. However, for the surveying, the students faced some problems. Conclusion: the continuous and final tests can provide a simple tool to advice the teachers for more effective teaching of the RPD. So that the weakness in specific aspects of the RPD syllabus can be detected and corrected continuously from the beginning, during and at the end of the course.",
"title": ""
},
{
"docid": "08d9b5af2c9d8095bf6a6b3453c89f40",
"text": "Alzheimer's disease (AD) is a neurodegenerative disorder associated with loss of memory and cognitive abilities. Previous evidence suggested that exercise ameliorates learning and memory deficits by increasing brain derived neurotrophic factor (BDNF) and activating downstream pathways in AD animal models. However, upstream pathways related to increase BDNF induced by exercise in AD animal models are not well known. We investigated the effects of moderate treadmill exercise on Aβ-induced learning and memory impairment as well as the upstream pathway responsible for increasing hippocampal BDNF in an animal model of AD. Animals were divided into five groups: Intact, Sham, Aβ1-42, Sham-exercise (Sham-exe) and Aβ1-42-exercise (Aβ-exe). Aβ was microinjected into the CA1 area of the hippocampus and then animals in the exercise groups were subjected to moderate treadmill exercise (for 4 weeks with 5 sessions per week) 7 days after microinjection. In the present study the Morris water maze (MWM) test was used to assess spatial learning and memory. Hippocampal mRNA levels of BDNF, peroxisome proliferator-activated receptor gamma co-activator 1 alpha (PGC-1α), fibronectin type III domain-containing 5 (FNDC5) as well as protein levels of AMPK-activated protein kinase (AMPK), PGC-1α, BDNF, phosphorylation of AMPK were measured. Our results showed that intra-hippocampal injection of Aβ1-42 impaired spatial learning and memory which was accompanied by reduced AMPK activity (p-AMPK/total-AMPK ratio) and suppression of the PGC-1α/FNDC5/BDNF pathway in the hippocampus of rats. In contrast, moderate treadmill exercise ameliorated the Aβ1-42-induced spatial learning and memory deficit, which was accompanied by restored AMPK activity and PGC-1α/FNDC5/BDNF levels. Our results suggest that the increased AMPK activity and up-regulation of the PGC-1α/FNDC5/BDNF pathway by exercise are likely involved in mediating the beneficial effects of exercise on Aβ-induced learning and memory impairment.",
"title": ""
},
{
"docid": "957e103d533b3013e24aebd3617edd87",
"text": "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.",
"title": ""
},
{
"docid": "6a59641369fefcb7c7a917718f1d067c",
"text": "This paper presents an adaptive fuzzy sliding-mode dynamic controller (AFSMDC) of the car-like mobile robot (CLMR) for the trajectory tracking issue. First, a kinematics model of the nonholonomic CLMR is introduced. Then, according to the Lagrange formula, a dynamic model of the CLMR is created. For a real time trajectory tracking problem, an optimal controller capable of effectively driving the CLMR to track the desired trajectory is necessary. Therefore, an AFSMDC is proposed to accomplish the tracking task and to reduce the effect of the external disturbances and system uncertainties of the CLMR. The proposed controller could reduce the tracking errors between the output of the velocity controller and the real velocity of the CLMR. Therefore, the CLMR could track the desired trajectory without posture and orientation errors. Additionally, the stability of the proposed controller is proven by utilizing the Lyapunov stability theory. Finally, the simulation results validate the effectiveness of the proposed AFSMDC.",
"title": ""
},
{
"docid": "967f1e68847111ecf96d964422bea913",
"text": "Text preprocessing is an essential stage in text categorization (TC) particularly and text mining generally. Morphological tools can be used in text preprocessing to reduce multiple forms of the word to one form. There has been a debate among researchers about the benefits of using morphological tools in TC. Studies in the English language illustrated that performing stemming during the preprocessing stage degrades the performance slightly. However, they have a great impact on reducing the memory requirement and storage resources needed. The effect of the preprocessing tools on Arabic text categorization is an area of research. This work provides an evaluation study of several morphological tools for Arabic Text Categorization. The study includes using the raw text, the stemmed text, and the root text. The stemmed and root text are obtained using two different preprocessing tools. The results illustrated that using light stemmer combined with a good performing feature selection method enhances the performance of Arabic Text Categorization especially for small threshold values.",
"title": ""
},
{
"docid": "3613dd18a4c930a28ed520192f7ac23f",
"text": "OBJECTIVES\nIn this paper we present a contemporary understanding of \"nursing informatics\" and relate it to applications in three specific contexts, hospitals, community health, and home dwelling, to illustrate achievements that contribute to the overall schema of health informatics.\n\n\nMETHODS\nWe identified literature through database searches in MEDLINE, EMBASE, CINAHL, and the Cochrane Library. Database searching was complemented by one author search and hand searches in six relevant journals. The literature review helped in conceptual clarification and elaborate on use that are supported by applications in different settings.\n\n\nRESULTS\nConceptual clarification of nursing data, information and knowledge has been expanded to include wisdom. Information systems and support for nursing practice benefits from conceptual clarification of nursing data, information, knowledge, and wisdom. We introduce three examples of information systems and point out core issues for information integration and practice development.\n\n\nCONCLUSIONS\nExploring interplays of data, information, knowledge, and wisdom, nursing informatics takes a practice turn, accommodating to processes of application design and deployment for purposeful use by nurses in different settings. Collaborative efforts will be key to further achievements that support task shifting, mobility, and ubiquitous health care.",
"title": ""
},
{
"docid": "457ea53f0a303e8eba8847422ef61e5a",
"text": "Tele-operated hydraulic underwater manipulators are commonly used to perform remote underwater intervention tasks such as weld inspection or mating of connectors. Automation of these tasks to use tele-assistance requires a suitable hybrid position/force control scheme, to specify simultaneously the robot motion and contact forces. Classical linear control does not allow for the highly non-linear and time varying robot dynamics in this situation. Adequate control performance requires more advanced controllers. This paper presents and compares two different advanced hybrid control algorithms. The first is based on a modified Variable Structure Control (VSC-HF) with a virtual environment, and the second uses a multivariable self-tuning adaptive controller. A direct comparison of the two proposed control schemes is performed in simulation, using a model of the dynamics of a hydraulic underwater manipulator (a Slingsby TA9) in contact with a surface. These comparisons look at the performance of the controllers under a wide variety of operating conditions, including different environment stiffnesses, positions of the robot and",
"title": ""
},
{
"docid": "6046c04b170c68476affb306841c5043",
"text": "Innovative ship design projects often require an extensive concept design phase to allow a wide range of potential solutions to be investigated, identifying which best suits the requirements. In these situations, the majority of ship design tools do not provide the best solution, limiting quick reconfiguration by focusing on detailed definition only. Parametric design, including generation of the hull surface, can model topology as well as geometry offering advantages often not exploited. Paramarine is an integrated ship design environment that is based on an objectorientated framework which allows the parametric connection of all aspects of both the product model and analysis together. Design configuration is managed to ensure that relationships within the model are topologically correct and kept up to date. While this offers great flexibility, concept investigation is streamlined by the Early Stage Design module, based on the (University College London) Functional Building Block methodology, collating design requirements, product model definition and analysis together to establish the form, function and layout of the design. By bringing this information together, the complete design requirements for the hull surface itself are established and provide the opportunity for parametric hull form generation techniques to have a fully integrated role in the concept design process. This paper explores several different hull form generation techniques which have been combined with the Early Stage Design module to demonstrate the capability of this design partnership.",
"title": ""
},
{
"docid": "d15ce9f62f88a07db6fa427fae61f26c",
"text": "This paper introduced a detail ElGamal digital signature scheme, and mainly analyzed the existing problems of the ElGamal digital signature scheme. Then improved the scheme according to the existing problems of ElGamal digital signature scheme, and proposed an implicit ElGamal type digital signature scheme with the function of message recovery. As for the problem that message recovery not being allowed by ElGamal signature scheme, this article approached a method to recover message. This method will make ElGamal signature scheme have the function of message recovery. On this basis, against that part of signature was used on most attacks for ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed, after having tried to hid part of signature message and refining forthcoming implicit type signature scheme. The safety of the refined scheme was anlyzed, and its results indicated that the new scheme was better than the old one.",
"title": ""
},
{
"docid": "15195baf3ec186887e4c5ee5d041a5a6",
"text": "We show that generating English Wikipedia articles can be approached as a multidocument summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoderdecoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.",
"title": ""
},
{
"docid": "e4347c1b3df0bf821f552ef86a17a8c8",
"text": "Volumetric lesion segmentation via medical imaging is a powerful means to precisely assess multiple time-point lesion/tumor changes. Because manual 3D segmentation is prohibitively time consuming and requires radiological experience, current practices rely on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Despite their coarseness, RECIST marks are commonly found in current hospital picture and archiving systems (PACS), meaning they can provide a potentially powerful, yet extraordinarily challenging, source of weak supervision for full 3D segmentation. Toward this end, we introduce a convolutional neural network based weakly supervised self-paced segmentation (WSSS) method to 1) generate the initial lesion segmentation on the axial RECISTslice; 2) learn the data distribution on RECIST-slices; 3) adapt to segment the whole volume slice by slice to finally obtain a volumetric segmentation. In addition, we explore how super-resolution images (2 ∼ 5 times beyond the physical CT imaging), generated from a proposed stacked generative adversarial network, can aid the WSSS performance. We employ the DeepLesion dataset, a comprehensive CTimage lesion dataset of 32, 735 PACS-bookmarked findings, which include lesions, tumors, and lymph nodes of varying sizes, categories, body regions and surrounding contexts. These are drawn from 10, 594 studies of 4, 459 patients. We also validate on a lymph-node dataset, where 3D ground truth masks are available for all images. For the DeepLesion dataset, we report mean Dice coefficients of 93% on RECIST-slices and 76% in 3D lesion volumes. We further validate using a subjective user study, where an experienced ∗Indicates equal contribution. †This work is done during Jinzheng Cai’s internship at National Institutes of Health. Le Lu is now with Nvidia Corp ([email protected]). CN N Initial 2D Segmentation Self-Paced 3D Segmentation CN N CN N CN N Image Image",
"title": ""
}
] | scidocsrr |
a48a2385c64de73ec6837650edccc60c | Privacy Preserving Social Network Data Publication | [
{
"docid": "6fa6ce80c183cf9b36e56011490c0504",
"text": "Lipschitz extensions were recently proposed as a tool for designing node differentially private algorithms. However, efficiently computable Lipschitz extensions were known only for 1-dimensional functions (that is, functions that output a single real value). In this paper, we study efficiently computable Lipschitz extensions for multi-dimensional (that is, vector-valued) functions on graphs. We show that, unlike for 1-dimensional functions, Lipschitz extensions of higher-dimensional functions on graphs do not always exist, even with a non-unit stretch. We design Lipschitz extensions with small stretch for the sorted degree list and for the degree distribution of a graph. Crucially, our extensions are efficiently computable. We also develop new tools for employing Lipschitz extensions in the design of differentially private algorithms. Specifically, we generalize the exponential mechanism, a widely used tool in data privacy. The exponential mechanism is given a collection of score functions that map datasets to real values. It attempts to return the name of the function with nearly minimum value on the data set. Our generalized exponential mechanism provides better accuracy when the sensitivity of an optimal score function is much smaller than the maximum sensitivity of score functions. We use our Lipschitz extension and the generalized exponential mechanism to design a nodedifferentially private algorithm for releasing an approximation to the degree distribution of a graph. Our algorithm is much more accurate than algorithms from previous work. ∗Computer Science and Engineering Department, Pennsylvania State University. {asmith,sofya}@cse.psu.edu. Supported by NSF awards CDI-0941553 and IIS-1447700 and a Google Faculty Award. Part of this work was done while visiting Boston University’s Hariri Institute for Computation. 1 ar X iv :1 50 4. 07 91 2v 1 [ cs .C R ] 2 9 A pr 2 01 5",
"title": ""
}
] | [
{
"docid": "5c90f5a934a4d936257467a14a058925",
"text": "We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex",
"title": ""
},
{
"docid": "19fe8c6452dd827ffdd6b4c6e28bc875",
"text": "Motivation for the investigation of position and waypoint controllers is the demand for Unattended Aerial Systems (UAS) capable of fulfilling e.g. surveillance tasks in contaminated or in inaccessible areas. Hence, this paper deals with the development of a 2D GPS-based position control system for 4 Rotor Helicopters able to keep positions above given destinations as well as to navigate between waypoints while minimizing trajectory errors. Additionally, the novel control system enables permanent full speed flight with reliable altitude keeping considering that the resulting lift is decreasing while changing pitch or roll angles for position control. In the following chapters the control procedure for position control and waypoint navigation is described. The dynamic behavior was simulated by means of Matlab/Simulink and results are shown. Further, the control strategies were implemented on a flight demonstrator for validation, experimental results are provided and a comparison is discussed.",
"title": ""
},
{
"docid": "ff93e77bb0e0b24a06780a05cc16123d",
"text": "Models in science may be used for various purposes: organizing data, synthesizing information, and making predictions. However, the value of model predictions is undermined by their uncertainty, which arises primarily from the fact that our models of complex natural systems are always open. Models can never fully specify the systems that they describe, and therefore their predictions are always subject to uncertainties that we cannot fully specify. Moreover, the attempt to make models capture the complexities of natural systems leads to a paradox: the more we strive for realism by incorporating as many as possible of the different processes and parameters that we believe to be operating in the system, the more difficult it is for us to know if our tests of the model are meaningful. A complex model may be more realistic, yet it is ironic that as we add more factors to a model, the certainty of its predictions may decrease even as our intuitive faith in the model increases. For this and other reasons, model output should not be viewed as an accurate prediction of the future state of the system. Short timeframe model output can and should be used to evaluate models and suggest avenues for future study. Model output can also generate “what if” scenarios that can help to evaluate alternative courses of action (or inaction), including worst-case and best-case outcomes. But scientists should eschew long-range deterministic predictions, which are likely to be erroneous and may damage the credibility of the communities that generate them.",
"title": ""
},
{
"docid": "d00cdbbe08a56952685118e68c0b9115",
"text": "s R esum es Canadian Undergraduate Mathematics Conference 1998 | Part 3 The Brachistochrone Problem Nils Johnson The University of British Columbia The brachistochrone problem is to nd the curve between two points down which a bead will slide in the shortest amount of time, neglecting friction and assuming conservation of energy. To solve the problem, an integral is derived that computes the amount of time it would take a bead to slide down a given curve y(x). This integral is minimized over all possible curves and yields the di erential equation y(1 + (y)) = k as a constraint for the minimizing function y(x). Solving this di erential equation shows that a cycloid (the path traced out by a point on the rim of a rolling wheel) is the solution to the brachistochrone problem. First proposed in 1696 by Johann Bernoulli, this problem is credited with having led to the development of the calculus of variations. The solution presented assumes knowledge of one-dimensional calculus and elementary di erential equations. The Theory of Error-Correcting Codes Dennis Hill University of Ottawa Coding theory is concerned with the transfer of data. There are two issues of fundamental importance. First, the data must be transferred accurately. But equally important is that the transfer be done in an e cient manner. It is the interplay of these two issues which is the core of the theory of error-correcting codes. Typically, the data is represented as a string of zeros and ones. Then a code consists of a set of such strings, each of the same length. The most fruitful approach to the subject is to consider the set f0; 1g as a two-element eld. We will then only",
"title": ""
},
{
"docid": "476aa14f6b71af480e8ab4747849d7e3",
"text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.",
"title": ""
},
{
"docid": "2526915745dda9026836347292f79d12",
"text": "I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space.",
"title": ""
},
{
"docid": "f095118c63d1531ebdbaec3565b0d91f",
"text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.",
"title": ""
},
{
"docid": "940e3a77d9dbe1da2fb2f38ae768b71e",
"text": "Layer-by-layer deposition of materials to manufacture parts—better known as three-dimensional (3D) printing or additive manufacturing—has been flourishing as a fabrication process in the past several years and now can create complex geometries for use as models, assembly fixtures, and production molds. Increasing interest has focused on the use of this technology for direct manufacturing of production parts; however, it remains generally limited to single-material fabrication, which can limit the end-use functionality of the fabricated structures. The next generation of 3D printing will entail not only the integration of dissimilar materials but the embedding of active components in order to deliver functionality that was not possible previously. Examples could include arbitrarily shaped electronics with integrated microfluidic thermal management and intelligent prostheses custom-fit to the anatomy of a specific patient. We review the state of the art in multiprocess (or hybrid) 3D printing, in which complementary processes, both novel and traditional, are combined to advance the future of manufacturing.",
"title": ""
},
{
"docid": "9a3a73f35b27d751f237365cc34c8b28",
"text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.",
"title": ""
},
{
"docid": "05127dab049ef7608932913f66db0990",
"text": "This paper presents a hybrid tele-manipulation system, comprising of a sensorized 3-D-printed soft robotic gripper and a soft fabric-based haptic glove that aim at improving grasping manipulation and providing sensing feedback to the operators. The flexible 3-D-printed soft robotic gripper broadens what a robotic gripper can do, especially for grasping tasks where delicate objects, such as glassware, are involved. It consists of four pneumatic finger actuators, casings with through hole for housing the actuators, and adjustable base. The grasping length and width can be configured easily to suit a variety of objects. The soft haptic glove is equipped with flex sensors and soft pneumatic haptic actuator, which enables the users to control the grasping, to determine whether the grasp is successful, and to identify the grasped object shape. The fabric-based soft pneumatic haptic actuator can simulate haptic perception by producing force feedback to the users. Both the soft pneumatic finger actuator and haptic actuator involve simple fabrication technique, namely 3-D-printed approach and fabric-based approach, respectively, which reduce fabrication complexity as compared to the steps involved in a traditional silicone-based approach. The sensorized soft robotic gripper is capable of picking up and holding a wide variety of objects in this study, ranging from lightweight delicate object weighing less than 50 g to objects weighing 1100 g. The soft haptic actuator can produce forces of up to 2.1 N, which is more than the minimum force of 1.5 N needed to stimulate haptic perception. The subjects are able to differentiate the two objects with significant shape differences in the pilot test. Compared to the existing soft grippers, this is the first soft sensorized 3-D-printed gripper, coupled with a soft fabric-based haptic glove that has the potential to improve the robotic grasping manipulation by introducing haptic feedback to the users.",
"title": ""
},
{
"docid": "a58769ca02b9409a983ac6d7ba69f0be",
"text": "In this paper, we describe an approach for the automatic medical annotation task of the 2008 CLEF cross-language image retrieval campaign (ImageCLEF). The data comprise 12076 fully annotated images according to the IRMA code. This work is focused on the process of feature extraction from images and hierarchical multi-label classification. To extract features from the images we used a technique called: local distribution of edges. With this techniques each image was described with 80 variables. The goal of the classification task was to classify an image according to the IRMA code. The IRMA code is organized hierarchically. Hence, as classifer we selected an extension of the predictive clustering trees (PCTs) that is able to handle this type of data. Further more, we constructed ensembles (Bagging and Random Forests) that use PCTs as base classifiers.",
"title": ""
},
{
"docid": "adddebf272a3b0fe510ea04ed7cc3837",
"text": "PURPOSE\nTo explore the association of angiographic nonperfusion in focal and diffuse recalcitrant diabetic macular edema (DME) in diabetic retinopathy (DR).\n\n\nDESIGN\nA retrospective, observational case series of patients with the diagnosis of recalcitrant DME for at least 2 years placed into 1 of 4 cohorts based on the degree of DR.\n\n\nMETHODS\nA total of 148 eyes of 76 patients met the inclusion criteria at 1 academic institution. Ultra-widefield fluorescein angiography (FA) images and spectral-domain optical coherence tomography (SD OCT) images were obtained on all patients. Ultra-widefield FA images were graded for quantity of nonperfusion, which was used to calculate ischemic index. Main outcome measures were mean ischemic index, mean change in central macular thickness (CMT), and mean number of macular photocoagulation treatments over the 2-year study period.\n\n\nRESULTS\nThe mean ischemic index was 47% (SD 25%; range 0%-99%). The mean ischemic index of eyes within Cohorts 1, 2, 3, and 4 was 0%, 34% (range 16%-51%), 53% (range 32%-89%), and 65% (range 47%-99%), respectively. The mean percentage decrease in CMT in Cohorts 1, 2, 3, and 4 were 25.2%, 19.1%, 11.6%, and 7.2%, respectively. The mean number of macular photocoagulation treatments in Cohorts 1, 2, 3, and 4 was 2.3, 4.8, 5.3, and 5.7, respectively.\n\n\nCONCLUSIONS\nEyes with larger areas of retinal nonperfusion and greater severity of DR were found to have the most recalcitrant DME, as evidenced by a greater number of macular photocoagulation treatments and less reduction in SD OCT CMT compared with eyes without retinal nonperfusion. Areas of untreated retinal nonperfusion may generate biochemical mediators that promote ischemia and recalcitrant DME.",
"title": ""
},
{
"docid": "d798bc49068356495074f92b3bfe7a4b",
"text": "This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The e!ects of three main factors * input nodes, hidden nodes and sample size, are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, large sample is helpful to ease the over\"tting problem.",
"title": ""
},
{
"docid": "77f5216ede8babf4fb3b2bcbfc9a3152",
"text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.",
"title": ""
},
{
"docid": "fbe1e6b899b1a2e9d53d25e3fa70bd86",
"text": "Previous empirical studies examining the relationship between IT capability and accountingbased measures of firm performance report mixed results. We argue that extant research (1) has relied on aggregate overall measures of the firm’s IT capability, ignoring the specific type and nature of IT capability; and (2) has not fully considered important contextual (environmental) conditions that influence the IT capability-firm performance relationship. Drawing on the resource-based view (RBV), we advance a contingency perspective and propose that IT capabilities’ impact on firm resources is contingent on the “fit” between the type of IT capability/resource a firm possesses and the demands of the environment (industry) in which it competes. Specifically, using publicly available rankings as proxies for two types of IT capabilities (internally-focused and externally-focused capabilities), we empirically examines the degree to which three industry characteristics (dynamism, munificence, and complexity) influence the impact of each type of IT capability on measures of financial performance. After controlling for prior performance, the findings provide general support for the posited contingency model of IT impact. The implications of these findings on practice and research are discussed.",
"title": ""
},
{
"docid": "28b796954834230a0e8218e24bab0d35",
"text": "Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).",
"title": ""
},
{
"docid": "be48b00ee50c872d42ab95e193ac774b",
"text": "T profitability of remanufacturing systems for different cost, technology, and logistics structures has been extensively investigated in the literature. We provide an alternative and somewhat complementary approach that considers demand-related issues, such as the existence of green segments, original equipment manufacturer competition, and product life-cycle effects. The profitability of a remanufacturing system strongly depends on these issues as well as on their interactions. For a monopolist, we show that there exist thresholds on the remanufacturing cost savings, the green segment size, market growth rate, and consumer valuations for the remanufactured products, above which remanufacturing is profitable. More important, we show that under competition remanufacturing can become an effective marketing strategy, which allows the manufacturer to defend its market share via price discrimination.",
"title": ""
},
{
"docid": "37c35b782bb80d2324749fc71089c445",
"text": "Predicting the stock market is considered to be a very difficult task due to its non-linear and dynamic nature. Our proposed system is designed in such a way that even a layman can use it. It reduces the burden on the user. The user’s job is to give only the recent closing prices of a stock as input and the proposed Recommender system will instruct him when to buy and when to sell if it is profitable or not to buy share in case if it is not profitable to do trading. Using soft computing based techniques is considered to be more suitable for predicting trends in stock market where the data is chaotic and large in number. The soft computing based systems are capable of extracting relevant information from large sets of data by discovering hidden patterns in the data. Here regression trees are used for dimensionality reduction and clustering is done with the help of Self Organizing Maps (SOM). The proposed system is designed to assist stock market investors identify possible profit-making opportunities and also help in developing a better understanding on how to extract the relevant information from stock price data. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a84b5fa43c17eebd9cc3ddf2a0d2129e",
"text": "The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences in published methods. Existing datasets either lack a full six degree-of-freedom ground-truth or are limited to small spaces with optical tracking systems. We take advantage of advances in pure inertial navigation, and develop a set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry. For this purpose, we have built a test rig equipped with an iPhone, a Google Pixel Android phone, and a Google Tango device. We provide a wide range of raw sensor data that is accessible on almost any modern-day smartphone together with a high-quality ground-truth track. We also compare resulting visual-inertial tracks from Google Tango, ARCore, and Apple ARKit with two recent methods published in academic forums. The data sets cover both indoor and outdoor cases, with stairs, escalators, elevators, office environments, a shopping mall, and metro station.",
"title": ""
},
{
"docid": "80477fdab96ae761dbbb7662b87e82a0",
"text": "This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.",
"title": ""
}
] | scidocsrr |
d3bca3025b5f26f3428a448435e5eab1 | Upsampling range data in dynamic environments | [
{
"docid": "67e16f36bb6d83c5d6eae959a7223b77",
"text": "Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrarily to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. A particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to say that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately the aperture problem makes it impossible to estimate ground true trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.",
"title": ""
}
] | [
{
"docid": "ce0b0543238a81c3f02c43e63a285605",
"text": "Hatebusters is a web application for actively reporting YouTube hate speech, aiming to establish an online community of volunteer citizens. Hatebusters searches YouTube for videos with potentially hateful comments, scores their comments with a classifier trained on human-annotated data and presents users those comments with the highest probability of being hate speech. It also employs gamification elements, such as achievements and leaderboards, to drive user engagement.",
"title": ""
},
{
"docid": "8dd540b33035904f63c67b57d4c97aa3",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "e3b1e52066d20e7c92e936cdb72cc32b",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "756b25456494b3ece9b240ba3957f91c",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
},
{
"docid": "2639f5d735abed38ed4f7ebf11072087",
"text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.",
"title": ""
},
{
"docid": "9f005054e640c2db97995c7540fe2034",
"text": "Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been a very few prior studies that take into account the attacker’s tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is a key to derive solutions that perform well in practice. In this investigation, we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker’s objective balances the benefit from attacks and the cost of being detected while the defender’s objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies.",
"title": ""
},
{
"docid": "f0d55892fb927c5c5324cfb7b8380bda",
"text": "The paper presents application of data mining methods for recognizing the most significant genes and gene sequences (treated as features) stored in a dataset of gene expression microarray. The investigations are performed for autism data. Few chosen methods of feature selection have been applied and their results integrated in the final outcome. In this way we find the contents of small set of the most important genes associated with autism. They have been applied in the classification procedure aimed on recognition of autism from reference group members. The results of numerical experiments concerning selection of the most important genes and classification of the cases on the basis of the selected genes will be discussed. The main contribution of the paper is in developing the fusion system of the results of many selection approaches into the final set, most closely associated with autism. We have also proposed special procedure of estimating the number of highest rank genes used in classification procedure. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b3bb84322c28a9d0493d9b8a626666e4",
"text": "Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.",
"title": ""
},
{
"docid": "37d4b01b77e548aa6226774be627471c",
"text": "A fully integrated 8-channel phased-array receiver at 24 GHz is demonstrated. Each channel achieves a gain of 43 dB, noise figure of 8 dB, and an IIP3 of -11dBm, consuming 29 mA of current from a 2.5 V supply. The 8-channel array has a beam-forming resolution of 22.5/spl deg/, a peak-to-null ratio of 20 dB (4-bits), a total array gain of 61 dB, and improves the signal-to-noise ratio by 9 dB.",
"title": ""
},
{
"docid": "0a2e59ab99b9666d8cf3fb31be9fa40c",
"text": "Behavioral targeting (BT) is a widely used technique for online advertising. It leverages information collected on an individual's web-browsing behavior, such as page views, search queries and ad clicks, to select the ads most relevant to user to display. With the proliferation of social networks, it is possible to relate the behavior of individuals and their social connections. Although the similarity among connected individuals are well established (i.e., homophily), it is still not clear whether and how we can leverage the activities of one's friends for behavioral targeting; whether forecasts derived from such social information are more accurate than standard behavioral targeting models. In this paper, we strive to answer these questions by evaluating the predictive power of social data across 60 consumer domains on a large online network of over 180 million users in a period of two and a half months. To our best knowledge, this is the most comprehensive study of social data in the context of behavioral targeting on such an unprecedented scale. Our analysis offers interesting insights into the value of social data for developing the next generation of targeting services.",
"title": ""
},
{
"docid": "461ee7b6a61a6d375a3ea268081f80f5",
"text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.",
"title": ""
},
{
"docid": "e63a8b6595e1526a537b0881bc270542",
"text": "The CTD which stands for “Conductivity-Temperature-Depth” is one of the most used instruments for the oceanographic measurements. MEMS based CTD sensor components consist of a conductivity sensor (C), temperature sensor (T) and a piezo resistive pressure sensor (D). CTDs are found in every marine related institute and navy throughout the world as they are used to produce the salinity profile for the area of the ocean under investigation and are also used to determine different oceanic parameters. This research paper provides the design, fabrication and initial test results on a prototype CTD sensor.",
"title": ""
},
{
"docid": "9cc23cd9bfb3e422e2b4ace1fe816855",
"text": "Evaluating surgeon skill has predominantly been a subjective task. Development of objective methods for surgical skill assessment are of increased interest. Recently, with technological advances such as robotic-assisted minimally invasive surgery (RMIS), new opportunities for objective and automated assessment frameworks have arisen. In this paper, we applied machine learning methods to automatically evaluate performance of the surgeon in RMIS. Six important movement features were used in the evaluation including completion time, path length, depth perception, speed, smoothness and curvature. Different classification methods applied to discriminate expert and novice surgeons. We test our method on real surgical data for suturing task and compare the classification result with the ground truth data (obtained by manual labeling). The experimental results show that the proposed framework can classify surgical skill level with relatively high accuracy of 85.7%. This study demonstrates the ability of machine learning methods to automatically classify expert and novice surgeons using movement features for different RMIS tasks. Due to the simplicity and generalizability of the introduced classification method, it is easy to implement in existing trainers. .",
"title": ""
},
{
"docid": "ef706ea7a6dcd5b71602ea4c28eb9bd3",
"text": "\"Stealth Dicing (SD) \" was developed to solve such inherent problems of dicing process as debris contaminants and unnecessary thermal damage on work wafer. In SD, laser beam power of transmissible wavelength is absorbed only around focal point in the wafer by utilizing temperature dependence of absorption coefficient of the wafer. And these absorbed power forms modified layer in the wafer, which functions as the origin of separation in followed separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides characteristics of devices will not be affected. Completely dry process of SD is another big advantage over other dicing methods.",
"title": ""
},
{
"docid": "ba3f3ca8a34e1ea6e54fe9dde673b51f",
"text": "This paper proposes a high-efficiency dual-band on-chip rectifying antenna (rectenna) at 35 and 94 GHz for wireless power transmission. The rectenna is designed in slotline (SL) and finite-width ground coplanar waveguide (FGCPW) transmission lines in a CMOS 0.13-μm process. The rectenna comprises a high gain linear tapered slot antenna (LTSA), an FGCPW to SL transition, a bandpass filter, and a full-wave rectifier. The LTSA achieves a VSWR=2 fractional bandwidth of 82% and 41%, and a gain of 7.4 and 6.5 dBi at the frequencies of 35 and 94 GHz. The measured power conversion efficiencies are 53% and 37% in free space at 35 and 94 GHz, while the incident radiation power density is 30 mW/cm2 . The fabricated rectenna occupies a compact size of 2.9 mm2.",
"title": ""
},
{
"docid": "4783e35e54d0c7f555015427cbdc011d",
"text": "The language of deaf and dumb which uses body parts to convey the message is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed method to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative studies between them [1].",
"title": ""
},
{
"docid": "3d319572361f55dd4b91881dac2c9ace",
"text": "In this paper, a modular interleaved boost converter is first proposed by integrating a forward energy-delivering circuit with a voltage-doubler to achieve high step-up ratio and high efficiency for dc-microgrid applications. Then, steady-state analyses are made to show the merits of the proposed converter module. For closed-loop control design, the corresponding small-signal model is also derived. It is seen that, for higher power applications, more modules can be paralleled to increase the power rating and the dynamic performance. As an illustration, closed-loop control of a 450-W rating converter consisting of two paralleled modules with 24-V input and 200-V output is implemented for demonstration. Experimental results show that the modular high step-up boost converter can achieve an efficiency of 95.8% approximately.",
"title": ""
},
{
"docid": "5a62c276e7cce7c7a10109f3c3b1e401",
"text": "A miniature coplanar antenna on a perovskite substrate is analyzed and designed using short circuit technique. The overall dimensions are minimized to 0.09 λ × 0.09 λ. The antenna geometry, the design concept, as well as the simulated and the measured results are discussed in this paper.",
"title": ""
},
{
"docid": "d9aadb86785057ae5445dc894b1ef7a7",
"text": "This paper presents Circe, an environment for the analysis of natural language requirements. Circe is first presented in terms of its architecture, based on a transformational paradigm. Details are then given for the various transformation steps, including (i) a novel technique for parsing natural language requirements, and (ii) an expert system based on modular agents, embodying intensional knowledge about software systems in general. The result of all the transformations is a set of models for the requirements document, for the system described by the requirements, and for the requirements writing process. These models can be inspected, measured, and validated against a given set of criteria. Some of the features of the environment are shown by means of an example. Various stages of requirements analysis are covered, from initial sketches to pseudo-code and UML models.",
"title": ""
},
{
"docid": "b58c1e18a792974f57e9f676c1495826",
"text": "The influence of bilingualism on cognitive test performance in older adults has received limited attention in the neuropsychology literature. The aim of this study was to examine the impact of bilingualism on verbal fluency and repetition tests in older Hispanic bilinguals. Eighty-two right-handed participants (28 men and 54 women) with a mean age of 61.76 years (SD = 9.30; range = 50-84) and a mean educational level of 14.8 years (SD = 3.6; range 2-23) were selected. Forty-five of the participants were English monolinguals, 18 were Spanish monolinguals, and 19 were Spanish-English bilinguals. Verbal fluency was tested by electing a verbal description of a picture and by asking participants to generate words within phonemic and semantic categories. Repetition was tested using a sentence-repetition test. The bilinguals' test scores were compared to English monolinguals' and Spanish monolinguals' test scores. Results demonstrated equal performance of bilingual and monolingual participants in all tests except that of semantic verbal fluency. Bilinguals who learned English before age 12 performed significantly better on the English repetition test and produced a higher number of words in the description of a picture than the bilinguals who learned English after age 12. Variables such as task demands, language interference, linguistic mode, and level of bilingualism are addressed in the Discussion section.",
"title": ""
}
] | scidocsrr |
a4e6969885378a9a417b58a5ddf66d67 | Circularly Polarized Substrate-Integrated Waveguide Tapered Slot Antenna for Millimeter-Wave Applications | [
{
"docid": "e50355a29533bc7a91468aae1053873d",
"text": "A substrate integrated waveguide (SIW)-fed circularly polarized (CP) antenna array with a broad bandwidth of axial ratio (AR) is presented for 60-GHz wireless personal area networks (WPAN) applications. The widened AR bandwidth of an antenna element is achieved by positioning a slot-coupled rotated strip above a slot cut onto the broadwall of an SIW. A 4 × 4 antenna array is designed and fabricated using low temperature cofired ceramic (LTCC) technology. A metal-topped via fence is introduced around the strip to reduce the mutual coupling between the elements of the array. The measured results show that the AR bandwidth is more than 7 GHz. A stable boresight gain is greater than 12.5 dBic across the desired bandwidth of 57-64 GHz.",
"title": ""
},
{
"docid": "e43ede0fe674fe92fbfa2f76165cf034",
"text": "In this communication, a compact circularly polarized (CP) substrate integrated waveguide (SIW) horn antenna is proposed and investigated. Through etching a sloping slot on the common broad wall of two SIWs, mode coupling is generated between the top and down SIWs, and thus, a new field component as TE01 mode is produced. During the coupling process along the sloping slot, the difference in guide wavelengths of the two orthogonal modes also brings a phase shift between the two modes, which provides a possibility for radiating the CP wave. Moreover, the two different ports will generate the electric field components of TE01 mode with the opposite direction, which indicates the compact SIW horn antenna with a dual CP property can be realized as well. Measured results indicate that the proposed antenna operates with a wide 3-dB axial ratio bandwidth of 11.8% ranging from 17.6 to 19.8 GHz. The measured results are in good accordance with the simulated ones.",
"title": ""
}
] | [
{
"docid": "37f157cdcd27c1647548356a5194f2bc",
"text": "Purpose – The aim of this paper is to propose a novel evaluation framework to explore the “root causes” that hinder the acceptance of using internal cloud services in a university. Design/methodology/approach – The proposed evaluation framework incorporates the duo-theme DEMATEL (decision making trial and evaluation laboratory) with TAM (technology acceptance model). The operational procedures were proposed and tested on a university during the post-implementation phase after introducing the internal cloud services. Findings – According to the results, clear understanding and operational ease under the theme perceived ease of use (PEOU) are more imperative; whereas improved usefulness and productivity under the theme perceived usefulness (PU) are more urgent to foster the usage of internal clouds in the case university. Research limitations/implications – Based on the findings, some intervention activities were suggested to enhance the level of users’ acceptance of internal cloud solutions in the case university. However, the results should not be generalized to apply to other educational establishments. Practical implications – To reduce the resistance from using internal clouds, some necessary intervention activities such as developing attractive training programs, creating interesting workshops, and rewriting user friendly manual or handbook are recommended. Originality/value – The novel two-theme DEMATEL has greatly contributed to the conventional one-theme DEMATEL theory. The proposed two-theme DEMATEL procedures were the first attempt to evaluate the acceptance of using internal clouds in university. The results have provided manifest root-causes under two distinct themes, which help derive effectual intervention activities to foster the acceptance of usage of internal clouds in a university.",
"title": ""
},
{
"docid": "55f11df001ffad95e07cd20b3b27406d",
"text": "CNNs have proven to be a very successful yet computationally expensive technique which made them slow to be adopted in mobile and embedded systems. There is a number of possible optimizations: minimizing the memory footprint, using lower precision and approximate computation, reducing computation cost of convolutions with FFTs. These have been explored recently and were shown to work. This project take ideas of using FFTs further and develops an alternative way to computing CNN – purely in frequency domain. As a side result it develops intuition about nonlinear elements: why do they work and how new types can be created.",
"title": ""
},
{
"docid": "865306ad6f5288cf62a4082769e8068a",
"text": "The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future.",
"title": ""
},
{
"docid": "ac08d20a1430ee10c7ff761cae9d9ada",
"text": "OBJECTIVES\nTo evaluate the clinical response at 12 month in a cohort of patients with rheumatoid arthritis treated with Etanar (rhTNFR:Fc), and to register the occurrence of adverse effects.\n\n\nMETHODS\nThis is a multicentre observational cohort study. It included patients over 18 years of age with an active rheumatoid arthritis diagnosis for which the treating physician had begun a treatment scheme of 25 mg of subcutaneous etanercept (Etanar ® 25 mg: biologic type rhTNFR:Fc), twice per week. Follow-up was done during 12 months, with assessments at weeks 12, 24, 36 and 48. Evaluated outcomes included tender joint count, swollen joint count, ACR20, ACR50, ACR70, HAQ and DAS28.\n\n\nRESULTS\nOne-hundred and five (105) subjects were entered into the cohort. The median of tender and swollen joint count, ranged from 19 and 14, respectively at onset to 1 at the 12th month. By month 12, 90.5% of the subjects reached ACR20, 86% ACR50, and 65% ACR70. The median of DAS28 went from 4.7 to 2, and the median HAQ went from 1.3 to 0.2. The rate of adverse effects was 14 for every 100 persons per year. No serious adverse effects were reported. The most frequent were pruritus (5 cases), and rhinitis (3 cases).\n\n\nCONCLUSIONS\nAfter a year of following up a patient cohort treated with etanercept 25 mg twice per week, significant clinical results were observed, resulting in adequate disease control in a high percentage of patients with an adequate level of safety.",
"title": ""
},
{
"docid": "85b169515b4e4b86117abcdd83f002ea",
"text": "While Bitcoin (Peer-to-Peer Electronic Cash) [Nak]solved the double spend problem and provided work withtimestamps on a public ledger, it has not to date extendedthe functionality of a blockchain beyond a transparent andpublic payment system. Satoshi Nakamoto's original referenceclient had a decentralized marketplace service which was latertaken out due to a lack of resources [Deva]. We continued withNakamoto's vision by creating a set of commercial-grade ser-vices supporting a wide variety of business use cases, includinga fully developed blockchain-based decentralized marketplace,secure data storage and transfer, and unique user aliases thatlink the owner to all services controlled by that alias.",
"title": ""
},
{
"docid": "6a64d064220681e83751938ce0190151",
"text": "Forensic dentistry can be defined in many ways. One of the more elegant definitions is simply that forensic dentistry represents the overlap between the dental and the legal professions. This two-part series presents the field of forensic dentistry by outlining two of the major aspects of the profession: human identification and bite marks. This first paper examines the use of the human dentition and surrounding structures to enable the identification of found human remains. Conventional and novel techniques are presented.",
"title": ""
},
{
"docid": "2f20f587bb46f7133900fd8c22cea3ab",
"text": "Recent years have witnessed the significant advance in fine-grained visual categorization, which targets to classify the objects belonging to the same species. To capture enough subtle visual differences and build discriminative visual description, most of the existing methods heavily rely on the artificial part annotations, which are expensive to collect in real applications. Motivated to conquer this issue, this paper proposes a multi-level coarse-to-fine object description. This novel description only requires the original image as input, but could automatically generate visual descriptions discriminative enough for fine-grained visual categorization. This description is extracted from five sources representing coarse-to-fine visual clues: 1) original image is used as the source of global visual clue; 2) object bounding boxes are generated using convolutional neural network (CNN); 3) with the generated bounding box, foreground is segmented using the proposed k nearest neighbour-based co-segmentation algorithm; and 4) two types of part segmentations are generated by dividing the foreground with an unsupervised part learning strategy. The final description is generated by feeding these sources into CNN models and concatenating their outputs. Experiments on two public benchmark data sets show the impressive performance of this coarse-to-fine description, i.e., classification accuracy achieves 82.5% on CUB-200-2011, and 86.9% on fine-grained visual categorization-Aircraft, respectively, which outperform many recent works.",
"title": ""
},
{
"docid": "c75388c19397bf1e743970cb32649b17",
"text": "In recent years, there has been a substantial amount of work on large-scale data analytics using Hadoop-based platforms running on large clusters of commodity machines. A lessexplored topic is how those data, dominated by application logs, are collected and structured to begin with. In this paper, we present Twitter’s production logging infrastructure and its evolution from application-specific logging to a unified “client events” log format, where messages are captured in common, well-formatted, flexible Thrift messages. Since most analytics tasks consider the user session as the basic unit of analysis, we pre-materialize “session sequences”, which are compact summaries that can answer a large class of common queries quickly. The development of this infrastructure has streamlined log collection and data analysis, thereby improving our ability to rapidly experiment and iterate on various aspects of the service.",
"title": ""
},
{
"docid": "a5e4199c16668f66656474f4eeb5d663",
"text": "Advances in information technology, particularly in the e-business arena, are enabling firms to rethink their supply chain strategies and explore new avenues for inter-organizational cooperation. However, an incomplete understanding of the value of information sharing and physical flow coordination hinder these efforts. This research attempts to help fill these gaps by surveying prior research in the area, categorized in terms of information sharing and flow coordination. We conclude by highlighting gaps in the current body of knowledge and identifying promising areas for future research. Subject Areas: e-Business, Inventory Management, Supply Chain Management, and Survey Research.",
"title": ""
},
{
"docid": "2f8eb33eed4aabce1d31f8b7dfe8e7de",
"text": "A pre-trained convolutional deep neural network (CNN) is a feed-forward computation perspective, which is widely used for the embedded systems, requires high power-and-area efficiency. This paper realizes a binarized CNN which treats only binary 2-values (+1/-1) for the inputs and the weights. In this case, the multiplier is replaced into an XNOR circuit instead of a dedicated DSP block. For hardware implementation, using binarized inputs and weights is more suitable. However, the binarized CNN requires the batch normalization techniques to retain the classification accuracy. In that case, the additional multiplication and addition require extra hardware, also, the memory access for its parameters reduces system performance. In this paper, we propose the batch normalization free CNN which is mathematically equivalent to the CNN using batch normalization. The proposed CNN treats the binarized inputs and weights with the integer bias. We implemented the VGG-16 benchmark CNN on the NetFPGA-SUME FPGA board, which has the Xilinx Inc. Virtex7 FPGA and three off-chip QDR II+ Synchronous SRAMs. Compared with the conventional FPGA realizations, although the classification error rate is 6.5% decayed, the performance is 2.82 times faster, the power efficiency is 1.76 times lower, and the area efficiency is 11.03 times smaller. Thus, our method is suitable for the embedded computer system.",
"title": ""
},
{
"docid": "6c8a6a1713473ae94d610891d917133f",
"text": "68 Computer Music Journal As digitization and information technologies advance, document analysis and optical-characterrecognition technologies have become more widely used. Optical Music Recognition (OMR), also commonly known as OCR (Optical Character Recognition) for Music, was first attempted in the 1960s (Pruslin 1966). Standard OCR techniques cannot be used in music-score recognition, because music notation has a two-dimensional structure. In a staff, the horizontal position denotes different durations of notes, and the vertical position defines the height of the note (Roth 1994). Models for nonmusical OCR assessment have been proposed and largely used (Kanai et al. 1995; Ventzislav 2003). An ideal system that could reliably read and “understand” music notation could be used in music production for educational and entertainment applications. OMR is typically used today to accelerate the conversion from image music sheets into a symbolic music representation that can be manipulated, thus creating new and revised music editions. Other applications use OMR systems for educational purposes (e.g., IMUTUS; see www.exodus.gr/imutus), generating customized versions of music exercises. A different use involves the extraction of symbolic music representations to be used as incipits or as descriptors in music databases and related retrieval systems (Byrd 2001). OMR systems can be classified on the basis of the granularity chosen to recognize the music score’s symbols. The architecture of an OMR system is tightly related to the methods used for symbol extraction, segmentation, and recognition. Generally, the music-notation recognition process can be divided into four main phases: (1) the segmentation of the score image to detect and extract symbols; (2) the recognition of symbols; (3) the reconstruction of music information; and (4) the construction of the symbolic music notation model to represent the information (Bellini, Bruno, and Nesi 2004). Music notation may present very complex constructs and several styles. This problem has been recently addressed by the MUSICNETWORK and Motion Picture Experts Group (MPEG) in their work on Symbolic Music Representation (www .interactivemusicnetwork.org/mpeg-ahg). Many music-notation symbols exist, and they can be combined in different ways to realize several complex configurations, often without using well-defined formatting rules (Ross 1970; Heussenstamm 1987). Despite various research systems for OMR (e.g., Prerau 1970; Tojo and Aoyama 1982; Rumelhart, Hinton, and McClelland 1986; Fujinaga 1988, 1996; Carter 1989, 1994; Kato and Inokuchi 1990; Kobayakawa 1993; Selfridge-Field 1993; Ng and Boyle 1994, 1996; Coüasnon and Camillerapp 1995; Bainbridge and Bell 1996, 2003; Modayur 1996; Cooper, Ng, and Boyle 1997; Bellini and Nesi 2001; McPherson 2002; Bruno 2003; Byrd 2006) as well as commercially available products, optical music recognition—and more generally speaking, music recognition—is a research field affected by many open problems. The meaning of “music recognition” changes depending on the kind of applications and goals (Blostein and Carter 1992): audio generation from a musical score, music indexing and searching in a library database, music analysis, automatic transcription of a music score into parts, transcoding a score into interchange data formats, etc. 
For such applications, we must employ common tools to provide answers to questions such as “What does a particular percentagerecognition rate that is claimed by this particular algorithm really mean?” and “May I invoke a common methodology to compare different OMR tools on the basis of my music?” As mentioned in Blostein and Carter (1992) and Miyao and Haralick (2000), there is no standard for expressing the results of the OMR process. Assessing Optical Music Recognition Tools",
"title": ""
},
{
"docid": "f4422ff5d89e2035d6480f6bc6eb5fb2",
"text": "Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.",
"title": ""
},
{
"docid": "2031114bd1dc1a3ca94bdd8a13ad3a86",
"text": "Crude extracts of curcuminoids and essential oil of Curcuma longa varieties Kasur, Faisalabad and Bannu were studied for their antibacterial activity against 4 bacterial strains viz., Bacillus subtilis, Bacillus macerans, Bacillus licheniformis and Azotobacter using agar well diffusion method. Solvents used to determine antibacterial activity were ethanol and methanol. Ethanol was used for the extraction of curcuminoids. Essential oil was extracted by hydrodistillation and diluted in methanol by serial dilution method. Both Curcuminoids and oil showed zone of inhibition against all tested strains of bacteria. Among all the three turmeric varieties, Kasur variety had the most inhibitory effect on the growth of all bacterial strains tested as compared to Faisalabad and Bannu varieties. Among all the bacterial strains B. subtilis was the most sensitive to turmeric extracts of curcuminoids and oil. The MIC value for different strains and varieties ranged from 3.0 to 20.6 mm in diameter.",
"title": ""
},
{
"docid": "f1ce50e0b787c1d10af44252b3a7e656",
"text": "This paper proposes a scalable approach for distinguishing malicious files from clean files by investigating the behavioural features using logs of various API calls. We also propose, as an alternative to the traditional method of manually identifying malware files, an automated classification system using runtime features of malware files. For both projects, we use an automated tool running in a virtual environment to extract API call features from executables and apply pattern recognition algorithms and statistical methods to differentiate between files. Our experimental results, based on a dataset of 1368 malware and 456 cleanware files, provide an accuracy of over 97% in distinguishing malware from cleanware. Our techniques provide a similar accuracy for classifying malware into families. In both cases, our results outperform comparable previously published techniques.",
"title": ""
},
{
"docid": "9ae435f5169e867dc9d4dc0da56ec9fb",
"text": "Renewable energy is currently the main direction of development of electric power. Because of its own characteristics, the reliability of renewable energy generation is low. Renewable energy generation system needs lots of energy conversion devices which are made of power electronic devices. Too much power electronic components can damage power quality in microgrid. High Frequency AC (HFAC) microgrid is an effective way to solve the problems of renewable energy generation system. Transmitting electricity by means of HFAC is a novel idea in microgrid. Although the HFAC will cause more loss of power, it can improve the power quality in microgrid. HFAC can also reduce the impact of fluctuations of renewable energy in microgrid. This paper mainly simulates the HFAC with Matlab/Simulink and analyzes the feasibility of HFAC in microgrid.",
"title": ""
},
{
"docid": "aece5f900543384df4464c6c0cd431d0",
"text": "AIM\nThe aim of the study was to evaluate the bleaching effect, morphological changes, and variations in calcium (Ca) and phosphate (P) in the enamel with hydrogen peroxide (HP) and carbamide peroxide (CP) after the use of different application regimens.\n\n\nMATERIALS AND METHODS\nFour groups of five teeth were randomly assigned, according to the treatment protocol: HP 37.5% applied for 30 or 60 minutes (HP30, HP60), CP 16% applied for 14 or 28 hours (CP14, CP28). Changes in dental color were evaluated, according to the following formula: ΔE = [(La-Lb)2+(aa-ab)2 + (ba-bb)2]1/2. Enamel morphology and Ca and P compositions were evaluated by confocal laser scanning microscope and environmental scanning electron microscopy.\n\n\nRESULTS\nΔE HP30 was significantly greater than CP14 (10.37 ± 2.65/8.56 ± 1.40), but not between HP60 and CP28. HP60 shows greater morphological changes than HP30. No morphological changes were observed in the groups treated with CP. The reduction in Ca and P was significantly greater in HP60 than in CP28 (p < 0.05).\n\n\nCONCLUSION\nBoth formulations improved tooth color; HP produced morphological changes and Ca and P a gradual decrease, while CP produced no morphological changes, and the decrease in mineral component was smaller.\n\n\nCLINICAL SIGNIFICANCE\nCP 16% applied during 2 weeks could be equally effective and safer for tooth whitening than to administer two treatment sessions with HP 37.5%.",
"title": ""
},
{
"docid": "43a668b9f37492f8b6657929b679b6e5",
"text": "Wireless multimedia sensor networks (WMSNs) attracts significant attention in the field of agriculture where disease detection plays an important role. To improve the cultivation yield of plants it is necessary to detect the onset of diseases in plants and provide advice to farmers who will act based on the received suggestion. Due to the limitations of WMSN, it is necessary to design a simple system which can provide higher accuracy with less complexity. In this paper a novel disease detection system (DDS) is proposed to detect and classify the diseases in leaves. Statistical based thresholding strategy is proposed for segmentation which is less complex compared to k-means clustering method. The features extracted from the segmented image will be transmitted through sensor nodes to the monitoring site where the analysis and classification is done using Support Vector Machine classifier. The performance of the proposed DDS has been evaluated in terms of accuracy and is compared with the existing k-means clustering technique. The results show that the proposed method provides an overall accuracy of around 98%. The transmission energy is also analyzed in real time using TelosB nodes.",
"title": ""
},
{
"docid": "d3cfa1f05310b89067f85b115eb593e8",
"text": "NK fitness landscapes are stochastically generated fitness functions on bit strings, parameterized (with genes and interactions between genes) so as to make them tunably ‘rugged’. Under the ‘natural’ genetic operators of bit-flipping mutation or recombination, NK landscapes produce multiple domains of attraction for the evolutionary dynamics. NK landscapes have been used in models of epistatic gene interactions, coevolution, genome growth, and Wright’s shifting balance model of adaptation. Theory for adaptive walks on NK landscapes has been derived, and generalizations that extend beyond Kauffman’s original framework have been utilized in these applications.",
"title": ""
},
{
"docid": "cbbb2c0a9d2895c47c488bed46d8f468",
"text": "We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.",
"title": ""
},
{
"docid": "831845dfb48d2bd9d7d86031f3862fa5",
"text": "This paper presents the analysis and implementation of an LCLC resonant converter working as maximum power point tracker (MPPT) in a PV system. This converter must guarantee a constant DC output voltage and must vary its effective input resistance in order to extract the maximum power of the PV generator. Preliminary analysis concludes that not all resonant load topologies can achieve the design conditions for a MPPT. Only the LCLC and LLC converter are suitable for this purpose.",
"title": ""
}
] | scidocsrr |
34330a7b716612a45a2972cf020b7b37 | Towards a Reduced-Wire Interface for CMUT-Based Intravascular Ultrasound Imaging Systems | [
{
"docid": "ffadf882ac55d9cb06b77b3ce9a6ad8c",
"text": "Three experimental techniques based on automatic swept-frequency network and impedance analysers were used to measure the dielectric properties of tissue in the frequency range 10 Hz to 20 GHz. The technique used in conjunction with the impedance analyser is described. Results are given for a number of human and animal tissues, at body temperature, across the frequency range, demonstrating that good agreement was achieved between measurements using the three pieces of equipment. Moreover, the measured values fall well within the body of corresponding literature data.",
"title": ""
},
{
"docid": "170a1dba20901d88d7dc3988647e8a22",
"text": "This paper discusses two antennas monolithically integrated on-chip to be used respectively for wireless powering and UWB transmission of a tag designed and fabricated in 0.18-μm CMOS technology. A multiturn loop-dipole structure with inductive and resistive stubs is chosen for both antennas. Using these on-chip antennas, the chip employs asymmetric communication links: at downlink, the tag captures the required supply wirelessly from the received RF signal transmitted by a reader and, for the uplink, ultra-wideband impulse-radio (UWB-IR), in the 3.1-10.6-GHz band, is employed instead of backscattering to achieve extremely low power and a high data rate up to 1 Mb/s. At downlink with the on-chip power-scavenging antenna and power-management unit circuitry properly designed, 7.5-cm powering distance has been achieved, which is a huge improvement in terms of operation distance compared with other reported tags with on-chip antenna. Also, 7-cm operating distance is achieved with the implemented on-chip UWB antenna. The tag can be powered up at all the three ISM bands of 915 MHz and 2.45 GHz, with off-chip antennas, and 5.8 GHz with the integrated on-chip antenna. The tag receives its clock and the commands wirelessly through the modulated RF powering-up signal. Measurement results show that the tag can operate up to 1 Mb/s data rate with a minimum input power of -19.41 dBm at 915-MHz band, corresponding to 15.7 m of operation range with an off-chip 0-dB gain antenna. This is a great improvement compared with conventional passive RFIDs in term of data rate and operation distance. The power consumption of the chip is measured to be just 16.6 μW at the clock frequency of 10 MHz at 1.2-V supply. In addition, in this paper, for the first time, the radiation pattern of an on-chip antenna at such a frequency is measured. The measurement shows that the antenna has an almost omnidirectional radiation pattern so that the chip's performance is less direction-dependent.",
"title": ""
}
] | [
{
"docid": "01ed88c12ed9b2ca96cdf46700005493",
"text": "Using soft tissue fillers to correct postrhinoplasty deformities in the nose is appealing. Fillers are minimally invasive and can potentially help patients who are concerned with the financial expense, anesthetic risk, or downtime generally associated with a surgical intervention. A variety of filler materials are currently available and have been used for facial soft tissue augmentation. Of these, hyaluronic acid (HA) derivatives, calcium hydroxylapatite gel (CaHA), and silicone have most frequently been used for treating nasal deformities. While effective, silicone is known to cause severe granulomatous reactions in some patients and should be avoided. HA and CaHA are likely safer, but still may occasionally lead to complications such as infection, thinning of the skin envelope, and necrosis. Nasal injection technique must include sub-SMAS placement to eliminate visible or palpable nodularity. Restricting the use of fillers to the nasal dorsum and sidewalls minimizes complications because more adverse events occur after injections to the nasal tip and alae. We believe that HA and CaHA are acceptable for the treatment of postrhinoplasty deformities in carefully selected patients; however, patients who are treated must be followed closely for complications. The use of any soft tissue filler in the nose should always be approached with great caution and with a thorough consideration of a patient's individual circumstances.",
"title": ""
},
{
"docid": "36c4b2ab451c24d2d0d6abcbec491116",
"text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.",
"title": ""
},
{
"docid": "c65f050e911abb4b58b4e4f9b9aec63b",
"text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.",
"title": ""
},
{
"docid": "09d7b14190056f357aa24ca7db71a74c",
"text": "Thirty-six blast-exposed patients and twenty-nine non-blast-exposed control subjects were tested on a battery of behavioral and electrophysiological tests that have been shown to be sensitive to central auditory processing deficits. Abnormal performance among the blast-exposed patients was assessed with reference to normative values established as the mean performance on each test by the control subjects plus or minus two standard deviations. Blast-exposed patients performed abnormally at rates significantly above that which would occur by chance on three of the behavioral tests of central auditory processing: the Gaps-In-Noise, Masking Level Difference, and Staggered Spondaic Words tests. The proportion of blast-exposed patients performing abnormally on a speech-in-noise test (Quick Speech-In-Noise) was also significantly above that expected by chance. These results suggest that, for some patients, blast exposure may lead to difficulties with hearing in complex auditory environments, even when peripheral hearing sensitivity is near normal limits.",
"title": ""
},
{
"docid": "54ab143dc18413c58c20612dbae142eb",
"text": "Elderly adults may master challenging cognitive demands by additionally recruiting the cross-hemispheric counterparts of otherwise unilaterally engaged brain regions, a strategy that seems to be at odds with the notion of lateralized functions in cerebral cortex. We wondered whether bilateral activation might be a general coping strategy that is independent of age, task content and brain region. While using functional magnetic resonance imaging (fMRI), we pushed young and old subjects to their working memory (WM) capacity limits in verbal, spatial, and object domains. Then, we compared the fMRI signal reflecting WM maintenance between hemispheric counterparts of various task-relevant cerebral regions that are known to exhibit lateralization. Whereas language-related areas kept their lateralized activation pattern independent of age in difficult tasks, we observed bilaterality in dorsolateral and anterior prefrontal cortex across WM domains and age groups. In summary, the additional recruitment of cross-hemispheric counterparts seems to be an age-independent domain-general strategy to master cognitive challenges. This phenomenon is largely confined to prefrontal cortex, which is arguably less specialized and more flexible than other parts of the brain.",
"title": ""
},
{
"docid": "a77517d692ec646474a5c77b9f188ef0",
"text": "Accurate segmentation of the heart is an important step towards evaluating cardiac function. In this paper, we present a fully automated framework for segmentation of the left (LV) and right (RV) ventricular cavities and the myocardium (Myo) on short-axis cardiac MR images. We investigate various 2D and 3D convolutional neural network architectures for this task. Experiments were performed on the ACDC 2017 challenge training dataset comprising cardiac MR images of 100 patients, where manual reference segmentations were made available for end-diastolic (ED) and end-systolic (ES) frames. We find that processing the images in a slice-by-slice fashion using 2D networks is beneficial due to a relatively large slice thickness. However, the exact network architecture only plays a minor role. We report mean Dice coefficients of 0.950 (LV), 0.893 (RV), and 0.899 (Myo), respectively with an average evaluation time of 1.1 seconds per volume on a modern GPU.",
"title": ""
},
{
"docid": "6e9e687db8f202a8fa6d49c5996e7141",
"text": "Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.",
"title": ""
},
{
"docid": "c5c46fb727ff9447ebe75e3625ad375b",
"text": "Plenty of face detection and recognition methods have been proposed and got delightful results in decades. Common face recognition pipeline consists of: 1) face detection, 2) face alignment, 3) feature extraction, 4) similarity calculation, which are separated and independent from each other. The separated face analyzing stages lead the model redundant calculation and are hard for end-to-end training. In this paper, we proposed a novel end-to-end trainable convolutional network framework for face detection and recognition, in which a geometric transformation matrix was directly learned to align the faces, instead of predicting the facial landmarks. In training stage, our single CNN model is supervised only by face bounding boxes and personal identities, which are publicly available from WIDER FACE [36] dataset and CASIA-WebFace [37] dataset. Tested on Face Detection Dataset and Benchmark (FDDB) [11] dataset and Labeled Face in the Wild (LFW) [9] dataset, we have achieved 89.24% recall for face detection task and 98.63% verification accuracy for face recognition task simultaneously, which are comparable to state-of-the-art results.",
"title": ""
},
{
"docid": "dcad8812d2d5f22cd940f45ce64fb16b",
"text": "Bioinformatics software quality assurance is essential in genomic medicine. Systematic verification and validation of bioinformatics software is difficult because it is often not possible to obtain a realistic \"gold standard\" for systematic evaluation. Here we apply a technique that originates from the software testing literature, namely Metamorphic Testing (MT), to systematically test three widely used short-read sequence alignment programs. MT alleviates the problems associated with the lack of gold standard by checking that the results from multiple executions of a program satisfy a set of expected or desirable properties that can be derived from the software specification or user expectations. We tested BWA, Bowtie and Bowtie2 using simulated data and one HapMap dataset. It is interesting to observe that multiple executions of the same aligner using slightly modified input FASTQ sequence file, such as after randomly re-ordering of the reads, may affect alignment results. Furthermore, we found that the list of variant calls can be affected unless strict quality control is applied during variant calling. Thorough testing of bioinformatics software is important in delivering clinical genomic medicine. This paper demonstrates a different framework to test a program that involves checking its properties, thus greatly expanding the number and repertoire of test cases we can apply in practice.",
"title": ""
},
{
"docid": "8dcb0f20c000a30c0d3330f6ac6b373b",
"text": "Although social networking sites (SNSs) have attracted increased attention and members in recent years, there has been little research on it: particularly on how a users’ extroversion or introversion can affect their intention to pay for these services and what other factors might influence them. We therefore proposed and tested a model that measured the users’ value and satisfaction perspectives by examining the influence of these factors in an empirical survey of 288 SNS members. At the same time, the differences due to their psychological state were explored. The causal model was validated using PLSGraph 3.0; six out of eight study hypotheses were supported. The results indicated that perceived value significantly influenced the intention to pay SNS subscription fees while satisfaction did not. Moreover, extroverts thought more highly of the social value of the SNS, while introverts placed more importance on its emotional and price value. The implications of these findings are discussed. Crown Copyright 2010 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c851bad8a1f7c8526d144453b3f2aa4f",
"text": "Taxonomies of person characteristics are well developed, whereas taxonomies of psychologically important situation characteristics are underdeveloped. A working model of situation perception implies the existence of taxonomizable dimensions of psychologically meaningful, important, and consequential situation characteristics tied to situation cues, goal affordances, and behavior. Such dimensions are developed and demonstrated in a multi-method set of 6 studies. First, the \"Situational Eight DIAMONDS\" dimensions Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality (Study 1) are established from the Riverside Situational Q-Sort (Sherman, Nave, & Funder, 2010, 2012, 2013; Wagerman & Funder, 2009). Second, their rater agreement (Study 2) and associations with situation cues and goal/trait affordances (Studies 3 and 4) are examined. Finally, the usefulness of these dimensions is demonstrated by examining their predictive power of behavior (Study 5), particularly vis-à-vis measures of personality and situations (Study 6). Together, we provide extensive and compelling evidence that the DIAMONDS taxonomy is useful for organizing major dimensions of situation characteristics. We discuss the DIAMONDS taxonomy in the context of previous taxonomic approaches and sketch future research directions.",
"title": ""
},
{
"docid": "b7d13c090e6d61272f45b1e3090f0341",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
{
"docid": "96db5cbe83ce9fbee781b8cc26d97fc8",
"text": "We present a novel method to obtain a 3D Euclidean reconstruction of both the background and moving objects in a video sequence. We assume that, multiple objects are moving rigidly on a ground plane observed by a moving camera. The video sequence is first segmented into static background and motion blobs by a homography-based motion segmentation method. Then classical \"Structure from Motion\" (SfM) techniques are applied to obtain a Euclidean reconstruction of the static background. The motion blob corresponding to each moving object is treated as if there were a static object observed by a hypothetical moving camera, called a \"virtual camera\". This virtual camera shares the same intrinsic parameters with the real camera but moves differently due to object motion. The same SfM techniques are applied to estimate the 3D shape of each moving object and the pose of the virtual camera. We show that the unknown scale of moving objects can be approximately determined by the ground plane, which is a key contribution of this paper. Another key contribution is that we prove that the 3D motion of moving objects can be solved from the virtual camera motion with a linear constraint imposed on the object translation. In our approach, a planartranslation constraint is formulated: \"the 3D instantaneous translation of moving objects must be parallel to the ground plane\". Results on real-world video sequences demonstrate the effectiveness and robustness of our approach.",
"title": ""
},
{
"docid": "4e8c0810a7869b5b4cddf27c12aea4d9",
"text": "The success of deep learning has been a catalyst to solving increasingly complex machine-learning problems, which often involve multiple data modalities. We review recent advances in deep multimodal learning and highlight the state-of the art, as well as gaps and challenges in this active research field. We first classify deep multimodal learning architectures and then discuss methods to fuse learned multimodal representations in deep-learning architectures. We highlight two areas of research–regularization strategies and methods that learn or optimize multimodal fusion structures–as exciting areas for future work.",
"title": ""
},
{
"docid": "d9b75ed31fefa68e5b43e803cafe286b",
"text": "Flavor and color of roasted peanuts are important research areas due to their significant influence on consumer preference. The aim of the present study was to explore correlations between sensory attributes of peanuts, volatile headspace compounds and color parameters. Different raw peanuts were selected to be representative of common market types, varieties, growing locations and grades used in Europe. Peanuts were roasted by a variety of processing technologies, resulting in 134 unique samples, which were analyzed for color, volatile composition and flavor profile by expert panel. Several headspace volatile compounds which positively or negatively correlated to \"roasted peanut\", \"raw bean\", \"dark roast\" and \"sweet\" attributes were identified. Results demonstrated that the correlation of CIELAB color parameters with roast related aromas, often taken for granted by the industry, is not strong when samples of different raw materials are subjected to different processing conditions.",
"title": ""
},
{
"docid": "d4cd46d9c8f0c225d4fe7e34b308e8f1",
"text": "In this paper, a 10 kW current-fed DC-DC converter using resonant push-pull topology is demonstrated and analyzed. The grounds for component dimensioning are given and the advantages and disadvantages of the resonant push-pull topology are discussed. The converter characteristics and efficiencies are demonstrated by calculations and prototype measurements.",
"title": ""
},
{
"docid": "ffc8c9a339d05c9b24d64fc52ee341ef",
"text": "This paper presents a proposed smartphone application for the unique SmartAbility Framework that supports interaction with technology for people with reduced physical ability, through focusing on the actions that they can perform independently. The Framework is a culmination of knowledge obtained through previously conducted technology feasibility trials and controlled usability evaluations involving the user community. The Framework is an example of ability-based design that focuses on the abilities of users instead of their disabilities. The paper includes a summary of Versions 1 and 2 of the Framework, including the results of a two-phased validation approach, conducted at the UK Mobility Roadshow and via a focus group of domain experts. A holistic model developed by adapting the House of Quality (HoQ) matrix of the Quality Function Deployment (QFD) approach is also described. A systematic literature review of sensor technologies built into smart devices establishes the capabilities of sensors in the Android and iOS operating systems. The review defines a set of inclusion and exclusion criteria, as well as search terms used to elicit literature from online repositories. The key contribution is the mapping of ability-based sensor technologies onto the Framework, to enable the future implementation of a smartphone application. Through the exploitation of the SmartAbility application, the Framework will increase technology amongst people with reduced physical ability and provide a promotional tool for assistive technology manufacturers.",
"title": ""
},
{
"docid": "ce636f568fc8c07b5a44190ae171c043",
"text": "Students, researchers and professional analysts lack effective tools to make personal and collective sense of problems while working in distributed teams. Central to this work is the process of sharing—and contesting—interpretations via different forms of argument. How does the “Web 2.0” paradigm challenge us to deliver useful, usable tools for online argumentation? This paper reviews the current state of the art in Web Argumentation, describes key features of the Web 2.0 orientation, and identifies some of the tensions that must be negotiated in bringing these worlds together. It then describes how these design principles are interpreted in Cohere, a web tool for social bookmarking, idea-linking, and argument visualization.",
"title": ""
},
{
"docid": "bfcef77dedf22118700737904be13c0e",
"text": "Autonomous operation is becoming an increasingly important factor for UAVs. It enables a vehicle to decide on the most appropriate action under consideration of the current vehicle and environment state. We investigated the decision-making process using the cognitive agent-based architecture Soar, which uses techniques adapted from human decision-making. Based on Soar an agent was developed which enables UAVs to autonomously make decisions and interact with a dynamic environment. One or more UAV agents were then tested in a simulation environment which has been developed using agent-based modelling. By simulating a dynamic environment, the capabilities of a UAV agent can be tested under defined conditions and additionally its behaviour can be visualised. The agent’s abilities were demonstrated using a scenario consisting of a highly dynamic border-surveillance mission with multiple autonomous UAVs. We can show that the autonomous agents are able to execute the mission successfully and can react adaptively to unforeseen events. We conclude that using a cognitive architecture is a promising approach for modelling autonomous behaviour.",
"title": ""
},
{
"docid": "4229e2db880628ea2f0922a94c30efe0",
"text": "Since the end of the 20th century, it has become clear that web browsers will play a crucial role in accessing Internet resources such as the World Wide Web. They evolved into complex software suites that are able to process a multitude of data formats. Just-In-Time (JIT) compilation was incorporated to speed up the execution of script code, but is also used besides web browsers for performance reasons. Attackers happily welcomed JIT in their own way, and until today, JIT compilers are an important target of various attacks. This includes for example JIT-Spray, JIT-based code-reuse attacks and JIT-specific flaws to circumvent mitigation techniques in order to simplify the exploitation of memory-corruption vulnerabilities. Furthermore, JIT compilers are complex and provide a large attack surface, which is visible in the steady stream of critical bugs appearing in them. In this paper, we survey and systematize the jungle of JIT compilers of major (client-side) programs, and provide a categorization of offensive techniques for abusing JIT compilation. Thereby, we present techniques used in academic as well as in non-academic works which try to break various defenses against memory-corruption vulnerabilities. Additionally, we discuss what mitigations arouse to harden JIT compilers to impede exploitation by skilled attackers wanting to abuse Just-In-Time compilers.",
"title": ""
}
] | scidocsrr |
fb3ec739ae67416aa9f0feacf4d301c9 | Computational Technique for an Efficient Classification of Protein Sequences With Distance-Based Sequence Encoding Algorithm | [
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
}
] | [
{
"docid": "d8042183e064ffba69b54246b17b9ff4",
"text": "Offshore software development is a new trend in the information technology (IT) outsourcing field, fueled by the globalization of IT and the improvement of telecommunication facilities. Countries such as India, Ireland, and Israel have established a significant presence in this market. In this article, we discuss how software processes affect offshore development projects. We use data from projects in India, and focus on three measures of project performance: effort, elapsed time, and software rework.",
"title": ""
},
{
"docid": "69d3c943755734903b9266ca2bd2fad1",
"text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.",
"title": ""
},
{
"docid": "a2cf369a67507d38ac1a645e84525497",
"text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. Successful nasal reconstruction was performed with autologous cartilages. Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.",
"title": ""
},
{
"docid": "60ac1fa826816d39562104849fff8f46",
"text": "The increased attention to environmentalism in western societies has been accompanied by a rise in ecotourism, i.e. ecologically sensitive travel to remote areas to learn about ecosystems, as well as in cultural tourism, focusing on the people who are a part of ecosystems. Increasingly, the internet has partnered with ecotourism companies to provide information about destinations and facilitate travel arrangements. This study reviews the literature linking ecotourism and sustainable development, as well as prior research showing that cultures have been historically commodified in tourism advertising for developing countries destinations. We examine seven websites advertising ecotourism and cultural tourism and conclude that: (1) advertisements for natural and cultural spaces are not always consistent with the discourse of sustainability; and (2) earlier critiques of the commodification of culture in print advertising extend to internet advertising also.",
"title": ""
},
{
"docid": "46170fe683c78a767cb15c0ac3437e83",
"text": "Recently, efforts in the development of speech recognition systems and robots have come to fruition with an overflow of applications in our daily lives. However, we are still far from achieving natural interaction between humans and robots, given that robots do not take into account the emotional state of speakers. The objective of this research is to create an automatic emotion classifier integrated with a robot, such that the robot can understand the emotional state of a human user by analyzing the speech signals from the user. This becomes particularly relevant in the realm of using assistive robotics to tailor therapeutic techniques towards assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children being diagnosed with ASD has been rapidly increasing, yet the clinical and societal support have not been enough to cope with the needs. Therefore, finding alternative, affordable, and accessible means of therapy and assistance has become more of a concern. Improving audio-based emotion prediction for children with ASD will allow for the robotic system to properly assess the engagement level of the child and modify its responses to maximize the quality of interaction between the robot and the child and sustain an interactive learning environment.",
"title": ""
},
{
"docid": "3a58c1a2e4428c0b875e1202055e5b13",
"text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "918bf13ef0289eb9b78309c83e963b26",
"text": "For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.",
"title": ""
},
{
"docid": "640fd96e02d8aa69be488323f77b40ba",
"text": "Low Power Wide Area (LPWA) connectivity, a wireless wide area technology that is characterized for interconnecting devices with low bandwidth connectivity and focusing on range and power efficiency, is seen as one of the fastest-growing components of Internet-of-Things (IoT). The LPWA connectivity is used to serve a diverse range of vertical applications, including agriculture, consumer, industrial, logistic, smart building, smart city and utilities. 3GPP has defined the maiden Narrowband IoT (NB-IoT) specification in Release 13 (Rel-13) to accommodate the LPWA demand. Several major cellular operators, such as China Mobile, Deutsch Telekom and Vodafone, have announced their NB-IoT trials or commercial network in year 2017. In Telekom Malaysia, we have setup a NB-IoT trial network for End-to-End (E2E) integration study. Our experimental assessment showed that the battery lifetime target for NB-IoT devices as stated by 3GPP utilizing latest-to-date Commercial Off-The-Shelf (COTS) NB-IoT modules is yet to be realized. Finally, several recommendations on how to optimize the battery lifetime while designing firmware for NB-IoT device are also provided.",
"title": ""
},
{
"docid": "aa3c0d7d023e1f9795df048ee44d92ec",
"text": "Correspondence Institute of Computer Science, University of Tartu, Juhan Liivi 2, 50409 Tartu, Estonia Email: [email protected] Summary Blockchain platforms, such as Ethereum, allow a set of actors to maintain a ledger of transactions without relying on a central authority and to deploy scripts, called smart contracts, that are executedwhenever certain transactions occur. These features can be used as basic building blocks for executing collaborative business processes between mutually untrusting parties. However, implementing business processes using the low-level primitives provided by blockchain platforms is cumbersome and error-prone. In contrast, established business process management systems, such as those based on the standard Business Process Model and Notation (BPMN), provide convenient abstractions for rapid development of process-oriented applications. This article demonstrates how to combine the advantages of a business process management system with those of a blockchain platform. The article introduces a blockchain-based BPMN execution engine, namely Caterpillar. Like any BPMN execution engine, Caterpillar supports the creation of instances of a process model and allows users to monitor the state of process instances and to execute tasks thereof. The specificity of Caterpillar is that the state of each process instance is maintained on the (Ethereum) blockchain and the workflow routing is performed by smart contracts generated by a BPMN-to-Solidity compiler. The Caterpillar compiler supports a large array of BPMN constructs, including subprocesses, multi-instances activities and event handlers. The paper describes the architecture of Caterpillar, and the interfaces it provides to support the monitoring of process instances, the allocation and execution of work items, and the execution of service tasks.",
"title": ""
},
{
"docid": "8e082f030aa5c5372fe327d4291f1864",
"text": "The Internet of Things (IoT) describes the interconnection of objects (or Things) for various purposes including identification, communication, sensing, and data collection. “Things” in this context range from traditional computing devices like Personal Computers (PC) to general household objects embedded with capabilities for sensing and/or communication through the use of technologies such as Radio Frequency Identification (RFID). This conceptual paper, from a philosophical viewpoint, introduces an initial set of guiding principles also referred to in the paper as commandments that can be applied by all the stakeholders involved in the IoT during its introduction, deployment and thereafter. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]",
"title": ""
},
{
"docid": "f376948c1b8952b0b19efad3c5ca0471",
"text": "This essay grew out of an examination of one-tailed significance testing. One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibil-ity have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance. 312 Hurlbert & Lombardi • ANN. ZOOL. FeNNICI Vol. 46 \" … the object of statistical methods is the reduction of data. A quantity of data … is to be replaced by relatively few quantities which shall …",
"title": ""
},
{
"docid": "7d68eaf1d9916b0504ac13f5ff9ef980",
"text": "The success of Bitcoin largely relies on the perception of a fair underlying peer-to-peer protocol: blockchain. Fairness here essentially means that the reward (in bitcoins) given to any participant that helps maintain the consistency of the protocol by mining, is proportional to the computational power devoted by that participant to the mining task. Without such perception of fairness, honest miners might be disincentivized to maintain the protocol, leaving the space for dishonest miners to reach a majority and jeopardize the consistency of the entire system. We prove, in this paper, that blockchain is actually unfair, even in a distributed system of only two honest miners. In a realistic setting where message delivery is not instantaneous, the ratio between the (expected) number of blocks committed by two miners is at least exponential in the product of the message delay and the difference between the two miners’ hashrates. To obtain our result, we model the growth of blockchain, which may be of independent interest. We also apply our result to explain recent empirical observations and vulnerabilities.",
"title": ""
},
{
"docid": "01165a990d16000ac28b0796e462147a",
"text": "Esthesioneuroblastoma is a rare malignant tumor of sinonasal origin. These tumors typically present with unilateral nasal obstruction and epistaxis, and diagnosis is confirmed on biopsy. Over the past 15 years, significant advances have been made in endoscopic technology and techniques that have made this tumor amenable to expanded endonasal resection. There is growing evidence supporting the feasibility of safe and effective resection of esthesioneuroblastoma via an expanded endonasal approach. This article outlines a technique for endoscopic resection of esthesioneuroblastoma and reviews the current literature on esthesioneuroblastoma with emphasis on outcomes after endoscopic resection of these malignant tumors.",
"title": ""
},
{
"docid": "71bafd4946377eaabff813bffd5617d7",
"text": "Autumn-seeded winter cereals acquire tolerance to freezing temperatures and become vernalized by exposure to low temperature (LT). The level of accumulated LT tolerance depends on the cold acclimation rate and factors controlling timing of floral transition at the shoot apical meristem. In this study, genomic loci controlling the floral transition time were mapped in a winter wheat (T. aestivum L.) doubled haploid (DH) mapping population segregating for LT tolerance and rate of phenological development. The final leaf number (FLN), days to FLN, and days to anthesis were determined for 142 DH lines grown with and without vernalization in controlled environments. Analysis of trait data by composite interval mapping (CIM) identified 11 genomic regions that carried quantitative trait loci (QTLs) for the developmental traits studied. CIM analysis showed that the time for floral transition in both vernalized and non-vernalized plants was controlled by common QTL regions on chromosomes 1B, 2A, 2B, 6A and 7A. A QTL identified on chromosome 4A influenced floral transition time only in vernalized plants. Alleles of the LT-tolerant parent, Norstar, delayed floral transition at all QTLs except at the 2A locus. Some of the QTL alleles delaying floral transition also increased the length of vegetative growth and delayed flowering time. The genes underlying the QTLs identified in this study encode factors involved in regional adaptation of cold hardy winter wheat.",
"title": ""
},
{
"docid": "1865a404c970d191ed55e7509b21fb9e",
"text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1",
"title": ""
},
{
"docid": "7ad00ade30fad561b4caca2fb1326ed8",
"text": "Today, digital games are available on a variety of mobile devices, such as tablet devices, portable game consoles and smart phones. Not only that, the latest mixed reality technology on mobile devices allows mobile games to integrate the real world environment into gameplay. However, little has been done to test whether the surroundings of play influence gaming experience. In this paper, we describe two studies done to test the effect of surroundings on immersion. Study One uses mixed reality games to investigate whether the integration of the real world environment reduces engagement. Whereas Study Two explored the effects of manipulating the lighting level, and therefore reducing visibility, of the surroundings. We found that immersion is reduced in the conditions where visibility of the surroundings is high. We argue that higher awareness of the surroundings has a strong impact on gaming experience.",
"title": ""
},
{
"docid": "afe1be9e13ca6e2af2c5177809e7c893",
"text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].",
"title": ""
},
{
"docid": "f284c6e32679d8413e366d2daf1d4613",
"text": "Summary form only given. Existing studies on ensemble classifiers typically take a static approach in assembling individual classifiers, in which all the important features are specified in advance. In this paper, we propose a new concept, dynamic ensemble, as an advanced classifier that could have dynamic component classifiers and have dynamic configurations. Toward this goal, we have substantially expanded the existing \"overproduce and choose\" paradigm for ensemble construction. A new algorithm called BAGA is proposed to explore this approach. Taking a set of decision tree component classifiers as input, BAGA generates a set of candidate ensembles using combined bagging and genetic algorithm techniques so that component classifiers are determined at execution time. Empirical studies have been carried out on variations of the BAGA algorithm, where the sizes of chosen classifiers, effects of bag size, voting function and evaluation functions on the dynamic ensemble construction, are investigated.",
"title": ""
},
{
"docid": "8e74a27a3edea7cf0e88317851bc15eb",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | scidocsrr |
e4ba62e072c6b93ff2d661792496595b | Game theory based mitigation of Interest flooding in Named Data Network | [
{
"docid": "e253fe7f481dc9fbd14a69e4c7d3bf23",
"text": "Current Internet is reaching the limits of its capabilities due to its function transition from host-to-host communication to content dissemination. Named Data Networking (NDN) - an instantiation of Content-Centric Networking approach, embraces this shift by stressing the content itself, rather than where it locates. NDN tries to provide better security and privacy than current Internet does, and resilience to Distributed Denial of Service (DDoS) is a significant issue. In this paper, we present a specific and concrete scenario of DDoS attack in NDN, where perpetrators make use of NDN's packet forwarding rules to send out Interest packets with spoofed names as attacking packets. Afterwards, we identify the victims of NDN DDoS attacks include both the hosts and routers. But the largest victim is not the hosts, but the routers, more specifically, the Pending Interest Table (PIT) within the router. PIT brings NDN many elegant features, but it suffers from vulnerability. We propose Interest traceback as a counter measure against the studied NDN DDoS attacks, which traces back to the originator of the attacking Interest packets. At last, we assess the harmful consequences brought by these NDN DDoS attacks and evaluate the Interest traceback counter measure. Evaluation results reveal that the Interest traceback method effectively mitigates the NDN DDoS attacks studied in this paper.",
"title": ""
}
] | [
{
"docid": "2f201cd1fe90e0cd3182c672110ce96d",
"text": "BACKGROUND\nFor many years, high dose radiation therapy was the standard treatment for patients with locally or regionally advanced non-small-cell lung cancer (NSCLC), despite a 5-year survival rate of only 3%-10% following such therapy. From May 1984 through May 1987, the Cancer and Leukemia Group B (CALGB) conducted a randomized trial that showed that induction chemotherapy before radiation therapy improved survival during the first 3 years of follow-up.\n\n\nPURPOSE\nThis report provides data for 7 years of follow-up of patients enrolled in the CALGB trial.\n\n\nMETHODS\nThe patient population consisted of individuals who had clinical or surgical stage III, histologically documented NSCLC; a CALGB performance status of 0-1; less than 5% loss of body weight in the 3 months preceding diagnosis; and radiographically visible disease. Patients were randomly assigned to receive either 1) cisplatin (100 mg/m2 body surface area intravenously on days 1 and 29) and vinblastine (5 mg/m2 body surface area intravenously weekly on days 1, 8, 15, 22, and 29) followed by radiation therapy with 6000 cGy given in 30 fractions beginning on day 50 (CT-RT group) or 2) radiation therapy with 6000 cGy alone beginning on day 1 (RT group) for a maximum duration of 6-7 weeks. Patients were evaluated for tumor regression if they had measurable or evaluable disease and were monitored for toxic effects, disease progression, and date of death.\n\n\nRESULTS\nThere were 78 eligible patients randomly assigned to the CT-RT group and 77 randomly assigned to the RT group. Both groups were similar in terms of sex, age, histologic cell type, performance status, substage of disease, and whether staging had been clinical or surgical. All patients had measurable or evaluable disease at the time of random assignment to treatment groups. Both groups received a similar quantity and quality of radiation therapy. As previously reported, the rate of tumor response, as determined radiographically, was 56% for the CT-RT group and 43% for the RT group (P = .092). After more than 7 years of follow-up, the median survival remains greater for the CT-RT group (13.7 months) than for the RT group (9.6 months) (P = .012) as ascertained by the logrank test (two-sided). The percentages of patients surviving after years 1 through 7 were 54, 26, 24, 19, 17, 13, and 13 for the CT-RT group and 40, 13, 10, 7, 6, 6, and 6 for the RT group.\n\n\nCONCLUSIONS\nLong-term follow-up confirms that patients with stage III NSCLC who receive 5 weeks of chemotherapy with cisplatin and vinblastine before radiation therapy have a 4.1-month increase in median survival. The use of sequential chemotherapy-radiotherapy increases the projected proportion of 5-year survivors by a factor of 2.8 compared with that of radiotherapy alone. However, inasmuch as 80%-85% of such patients still die within 5 years and because treatment failure occurs both in the irradiated field and at distant sites in patients receiving either sequential chemotherapy-radiotherapy or radiotherapy alone, the need for further improvements in both the local and systemic treatment of this disease persists.",
"title": ""
},
{
"docid": "60d6869cadebea71ef549bb2a7d7e5c3",
"text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.",
"title": ""
},
{
"docid": "564c71ca08e39063f5de01fa5c8e74a3",
"text": "The Internet of Things (IoT) is a latest concept of machine-to-machine communication, that also gave birth to several information security problems. Many traditional software solutions fail to address these security issues such as trustworthiness of remote entities. Remote attestation is a technique given by Trusted Computing Group (TCG) to monitor and verify this trustworthiness. In this regard, various remote validation methods have been proposed. However, static techniques cannot provide resistance to recent attacks e.g. the latest Heartbleed bug, and the recent high profile glibc attack on Linux operating system. In this research, we have designed and implemented a lightweight Linux kernel security module for IoT devices that is scalable enough to monitor multiple applications in the kernel space. The newly built technique can measure and report multiple application’s static and dynamic behavior simultaneously. Verification of behavior of applications is performed via machine learning techniques. The result shows that deviating behavior can be detected successfully by the verifier.",
"title": ""
},
{
"docid": "51344373373bf04846ee40b049b086b9",
"text": "We present a new algorithm for real-time hand tracking on commodity depth-sensing devices. Our method does not require a user-specific calibration session, but rather learns the geometry as the user performs live in front of the camera, thus enabling seamless virtual interaction at the consumer level. The key novelty in our approach is an online optimization algorithm that jointly estimates pose and shape in each frame, and determines the uncertainty in such estimates. This knowledge allows the algorithm to integrate per-frame estimates over time, and build a personalized geometric model of the captured user. Our approach can easily be integrated in state-of-the-art continuous generative motion tracking software. We provide a detailed evaluation that shows how our approach achieves accurate motion tracking for real-time applications, while significantly simplifying the workflow of accurate hand performance capture. We also provide quantitative evaluation datasets at http://gfx.uvic.ca/datasets/handy",
"title": ""
},
{
"docid": "d67c9703ee45ad306384bbc8fe11b50e",
"text": "Approximately thirty-four percent of people who experience acute low back pain (LBP) will have recurrent episodes. It remains unclear why some people experience recurrences and others do not, but one possible cause is a loss of normal control of the back muscles. We investigated whether the control of the short and long fibres of the deep back muscles was different in people with recurrent unilateral LBP from healthy participants. Recurrent unilateral LBP patients, who were symptom free during testing, and a group of healthy volunteers, participated. Intramuscular and surface electrodes recorded the electromyographic activity (EMG) of the short and long fibres of the lumbar multifidus and the shoulder muscle, deltoid, during a postural perturbation associated with a rapid arm movement. EMG onsets of the short and long fibres, relative to that of deltoid, were compared between groups, muscles, and sides. In association with a postural perturbation, short fibre EMG onset occurred later in participants with recurrent unilateral LBP than in healthy participants (p=0.022). The short fibres were active earlier than long fibres on both sides in the healthy participants (p<0.001) and on the non-painful side in the LBP group (p=0.045), but not on the previously painful side in the LBP group. Activity of deep back muscles is different in people with a recurrent unilateral LBP, despite the resolution of symptoms. Because deep back muscle activity is critical for normal spinal control, the current results provide the first evidence of a candidate mechanism for recurrent episodes.",
"title": ""
},
{
"docid": "efc82cbdc904f03a93fd6797024bf3cf",
"text": "We propose a novel extension of the encoder-decoder framework, called a review network. The review network is generic and can enhance any existing encoderdecoder model: in this paper, we consider RNN decoders with both CNN and RNN encoders. The review network performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a thought vector after each review step; the thought vectors are used as the input of the attention mechanism in the decoder. We show that conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework improves over state-ofthe-art encoder-decoder systems on the tasks of image captioning and source code captioning.1",
"title": ""
},
{
"docid": "5fcb9873afd16e6705ab77d7e59aa453",
"text": "Charging PEVs (Plug-In Electric Vehicles) at public fast charging station can improve the public acceptance and increase their penetration level by solving problems related to vehicles' battery. However, the price for the impact of fast charging stations on the distribution grid has to be dealt with. The main purpose of this paper is to investigate the impacts of fast charging stations on a distribution grid using a stochastic fast charging model and to present the charging model with some of its results. The model is used to investigate the impacts on distribution transformer loading and system bus voltage profiles of the test distribution grid. Stochastic and deterministic modelling approaches are also compared. It is concluded that fast charging stations affect transformer loading and system bus voltage profiles. Hence, necessary measures such as using local energy storage and voltage conditioning devices, such as SVC (Static Var Compensator), have to be used at the charging station to handle the problems. It is also illustrated that stochastic modelling approach can produce a more sound and realistic results than deterministic approach.",
"title": ""
},
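The stochastic element discussed in the passage above can be illustrated with a small Monte Carlo experiment: vehicle arrivals per hour are drawn from a Poisson distribution and the resulting aggregate charger load is compared against a transformer rating. All numbers (charger power, number of chargers, transformer rating, arrival rate) are assumptions for illustration, not values from the paper.

```python
import math
import random

TRANSFORMER_KW = 400     # assumed transformer rating (kW)
CHARGER_KW = 50          # assumed fast-charger power per vehicle (kW)
N_CHARGERS = 12          # assumed number of charging bays

def poisson(lam):
    """Knuth's algorithm for a Poisson sample (keeps the sketch dependency-free)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        k += 1
        p *= random.random()
    return k - 1

def overload_probability(arrivals_per_hour=8, trials=2000):
    """Estimate how often the hourly charging load exceeds the transformer rating."""
    overload_hours = 0
    for _ in range(trials):
        for _hour in range(24):
            occupied = min(poisson(arrivals_per_hour), N_CHARGERS)
            if occupied * CHARGER_KW > TRANSFORMER_KW:
                overload_hours += 1
    return overload_hours / (trials * 24)

if __name__ == "__main__":
    random.seed(1)
    print(f"P(overload in a given hour) ~ {overload_probability():.3f}")
```

A deterministic version of the same calculation would use the mean arrival rate directly (8 x 50 kW = 400 kW, exactly at the rating) and would conclude that the transformer is never overloaded; the stochastic run reports a substantial hourly overload probability, which is the kind of difference between the two modelling approaches that the passage points to.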
{
"docid": "107436d5f38f3046ef28495a14cc5caf",
"text": "There is a universal standard for facial beauty regardless of race, age, sex and other variables. Beautiful faces have ideal facial proportion. Ideal proportion is directly related to divine proportion, and that proportion is 1 to 1.618. All living organisms, including humans, are genetically encoded to develop to this proportion because there are extreme esthetic and physiologic benefits. The vast majority of us are not perfectly proportioned because of environmental factors. Establishment of a universal standard for facial beauty will significantly simplify the diagnosis and treatment of facial disharmonies and abnormalities. More important, treating to this standard will maximize facial esthetics, TMJ health, psychologic and physiologic health, fertility, and quality of life.",
"title": ""
},
{
"docid": "b88a79221efb5afc717cb2f97761271d",
"text": "BACKGROUND\nLymphangitic streaking, characterized by linear erythema on the skin, is most commonly observed in the setting of bacterial infection. However, a number of nonbacterial causes can result in lymphangitic streaking. We sought to elucidate the nonbacterial causes of lymphangitic streaking that may mimic bacterial infection to broaden clinicians' differential diagnosis for patients presenting with lymphangitic streaking.\n\n\nMETHODS\nWe performed a review of the literature, including all available reports pertaining to nonbacterial causes of lymphangitic streaking.\n\n\nRESULTS\nVarious nonbacterial causes can result in lymphangitic streaking, including viral and fungal infections, insect or spider bites, and iatrogenic etiologies.\n\n\nCONCLUSION\nAwareness of potential nonbacterial causes of superficial lymphangitis is important to avoid misdiagnosis and delay the administration of appropriate care.",
"title": ""
},
{
"docid": "3269b3574b19a976de305c99f9529fcd",
"text": "The objective of this master thesis is to identify \" key-drivers \" embedded in customer satisfaction data. The data was collected by a large transportation sector corporation during five years and in four different countries. The questionnaire involved several different sections of questions and ranged from demographical information to satisfaction attributes with the vehicle, dealer and several problem areas. Various regression, correlation and cooperative game theory approaches were used to identify the key satisfiers and dissatisfiers. The theoretical and practical advantages of using the Shapley value, Canonical Correlation Analysis and Hierarchical Logistic Regression has been demonstrated and applied to market research. ii iii Acknowledgements",
"title": ""
},
{
"docid": "18883fdb506d235fdf72b46e76923e41",
"text": "The Ponseti method for the management of idiopathic clubfoot has recently experienced a rise in popularity, with several centers reporting excellent outcomes. The challenge in achieving a successful outcome with this method lies not in correcting deformity but in preventing relapse. The most common cause of relapse is failure to adhere to the prescribed postcorrective bracing regimen. Socioeconomic status, cultural factors, and physician-parent communication may influence parental compliance with bracing. New, more user-friendly braces have been introduced in the hope of improving the rate of compliance. Strategies that may be helpful in promoting adherence include educating the family at the outset about the importance of bracing, encouraging calls and visits to discuss problems, providing clear written instructions, avoiding or promptly addressing skin problems, and refraining from criticism of the family when noncompliance is evident. A strong physician-family partnership and consideration of underlying cognitive, socioeconomic, and cultural issues may lead to improved adherence to postcorrective bracing protocols and better patient outcomes.",
"title": ""
},
{
"docid": "3021929187465029b9761aeb3eb20580",
"text": "We show that a deep convolutional network with an architecture inspired by the models used in image recognition can yield accuracy similar to a long-short term memory (LSTM) network, which achieves the state-of-the-art performance on the standard Switchboard automatic speech recognition task. Moreover, we demonstrate that merging the knowledge in the CNN and LSTM models via model compression further improves the accuracy of the convolutional model.",
"title": ""
},
{
"docid": "45c006e52bdb9cfa73fd4c0ebf692dfe",
"text": "Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries. To overcome these shortcomings, we present ART, an adaptive radix tree (trie) for efficient indexing in main memory. Its lookup performance surpasses highly tuned, read-only search trees, while supporting very efficient insertions and deletions as well. At the same time, ART is very space efficient and solves the problem of excessive worst-case space consumption, which plagues most radix trees, by adaptively choosing compact and efficient data structures for internal nodes. Even though ART's performance is comparable to hash tables, it maintains the data in sorted order, which enables additional operations like range scan and prefix lookup.",
"title": ""
},
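The adaptive-node idea in the passage above can be illustrated with a stripped-down byte-wise trie whose inner nodes start as a small list of (key byte, child) pairs and switch to a full 256-slot array once they exceed a threshold, a rough analogue of ART's Node4 growing into Node256. Path compression, the intermediate Node16/Node48 layouts, and all performance tricks of the real structure are omitted; this is only a sketch of the adaptation mechanism.

```python
class Node:
    """Inner node that adapts its child layout to its fan-out."""
    SMALL_MAX = 4                      # grow past this many children (like ART's Node4)

    def __init__(self):
        self.keys = []                 # child bytes (small layout)
        self.children = []             # child nodes matching self.keys
        self.full = None               # 256-slot array (large layout), or None
        self.value = None              # payload if a key terminates at this node

    def child(self, byte):
        if self.full is not None:
            return self.full[byte]
        try:
            return self.children[self.keys.index(byte)]
        except ValueError:
            return None

    def add_child(self, byte, node):
        if self.full is not None:
            self.full[byte] = node
        elif len(self.keys) < self.SMALL_MAX:
            self.keys.append(byte)
            self.children.append(node)
        else:                          # adapt: small layout -> 256-slot array
            self.full = [None] * 256
            for k, c in zip(self.keys, self.children):
                self.full[k] = c
            self.full[byte] = node
            self.keys, self.children = [], []

def insert(root, key: bytes, value):
    node = root
    for b in key:
        nxt = node.child(b)
        if nxt is None:
            nxt = Node()
            node.add_child(b, nxt)
        node = nxt
    node.value = value

def lookup(root, key: bytes):
    node = root
    for b in key:
        node = node.child(b)
        if node is None:
            return None
    return node.value

root = Node()
for word in [b"art", b"are", b"ask", b"index", b"zip", b"query", b"map", b"set"]:
    insert(root, word, word.decode())          # 6 distinct first bytes force the root to adapt
print(lookup(root, b"index"), lookup(root, b"apple"))   # -> index None
```

In the real structure the child bytes are kept ordered inside each node, which is what preserves the sorted-order property (range scans, prefix lookups) that the passage mentions; the sketch above skips that detail.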
{
"docid": "11c106ac9e7002d138af49f1bf303c88",
"text": "The main purpose of Feature Subset Selection is to find a reduced subset of attributes from a data set described by a feature set. The task of a feature selection algorithm (FSA) is to provide with a computational solution motivated by a certain definition of relevance or by a reliable evaluation measure. In this paper several fundamental algorithms are studied to assess their performance in a controlled experimental scenario. A measure to evaluate FSAs is devised that computes the degree of matching between the output given by a FSA and the known optimal solutions. An extensive experimental study on synthetic problems is carried out to assess the behaviour of the algorithms in terms of solution accuracy and size as a function of the relevance, irrelevance, redundancy and size of the data samples. The controlled experimental conditions facilitate the derivation of better-supported and meaningful conclusions.",
"title": ""
},
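On synthetic problems of the kind described above, the relevant and irrelevant features are known by construction, so an evaluation score can compare an algorithm's selected subset against them. The function below is a simple stand-in for such a measure (it is not the specific formula from the paper), rewarding recovered relevant features and penalising selected irrelevant ones.

```python
def matching_degree(selected, relevant, irrelevant):
    """Score in [0, 1]: fraction of relevant features recovered, minus a
    penalty for picking features known to be irrelevant."""
    hit = len(selected & relevant) / len(relevant)
    noise = len(selected & irrelevant) / len(irrelevant) if irrelevant else 0.0
    return max(0.0, hit - noise)

relevant = {"f1", "f2", "f3"}           # features that define the synthetic target
irrelevant = {"f4", "f5", "f6", "f7"}   # pure noise features
print(matching_degree({"f1", "f2", "f5"}, relevant, irrelevant))   # 2/3 - 1/4 ~ 0.417
```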
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "79fd1db13ce875945c7e11247eb139c8",
"text": "This paper provides a comprehensive review of outcome studies and meta-analyses of effectiveness studies of psychodynamic therapy (PDT) for the major categories of mental disorders. Comparisons with inactive controls (waitlist, treatment as usual and placebo) generally but by no means invariably show PDT to be effective for depression, some anxiety disorders, eating disorders and somatic disorders. There is little evidence to support its implementation for post-traumatic stress disorder, obsessive-compulsive disorder, bulimia nervosa, cocaine dependence or psychosis. The strongest current evidence base supports relatively long-term psychodynamic treatment of some personality disorders, particularly borderline personality disorder. Comparisons with active treatments rarely identify PDT as superior to control interventions and studies are generally not appropriately designed to provide tests of statistical equivalence. Studies that demonstrate inferiority of PDT to alternatives exist, but are small in number and often questionable in design. Reviews of the field appear to be subject to allegiance effects. The present review recommends abandoning the inherently conservative strategy of comparing heterogeneous \"families\" of therapies for heterogeneous diagnostic groups. Instead, it advocates using the opportunities provided by bioscience and computational psychiatry to creatively explore and assess the value of protocol-directed combinations of specific treatment components to address the key problems of individual patients.",
"title": ""
},
{
"docid": "6902e1604957fa21adbe90674bf5488d",
"text": "State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in subseconds.",
"title": ""
},
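The lightweight initial step described above, hashing each triple on its subject so that all of a subject's triples land on the same worker, is simple to sketch. The worker count and the example triples below are made up; the point is only that a subject-rooted star join can then be answered by each worker independently, without shipping intermediate results.

```python
import hashlib
from collections import defaultdict

N_WORKERS = 4   # assumed cluster size

def worker_of(subject: str) -> int:
    """Deterministic subject hash, so every triple of a subject co-locates."""
    return int(hashlib.md5(subject.encode()).hexdigest(), 16) % N_WORKERS

def partition(triples):
    shards = defaultdict(list)
    for s, p, o in triples:
        shards[worker_of(s)].append((s, p, o))
    return shards

triples = [
    ("alice", "worksAt", "acme"),
    ("alice", "knows", "bob"),
    ("bob", "worksAt", "initech"),
    ("acme", "locatedIn", "berlin"),
]
for worker, rows in sorted(partition(triples).items()):
    print(f"worker {worker}: {rows}")

# A star query such as (?x worksAt ?c . ?x knows ?y) binds ?x to subjects, and
# every candidate subject's triples live on a single shard, so the join runs
# fully in parallel across workers.
```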
{
"docid": "f3467adcca693e015c9dcc85db04d492",
"text": "For urban driving, knowledge of ego-vehicle’s position is a critical piece of information that enables advanced driver-assistance systems or self-driving cars to execute safety-related, autonomous driving maneuvers. This is because, without knowing the current location, it is very hard to autonomously execute any driving maneuvers for the future. The existing solutions for localization rely on a combination of a Global Navigation Satellite System, an inertial measurement unit, and a digital map. However, in urban driving environments, due to poor satellite geometry and disruption of radio signal reception, their longitudinal and lateral errors are too significant to be used for an autonomous system. To enhance the existing system’s localization capability, this work presents an effort to develop a vision-based lateral localization algorithm. The algorithm aims at reliably counting, with or without observations of lane-markings, the number of road-lanes and identifying the index of the road-lane on the roadway upon which our vehicle happens to be driving. Tests of the proposed algorithms against intercity and interstate highway videos showed promising results in terms of counting the number of road-lanes and the indices of the current road-lanes. C © 2015 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "5536f306c3633874299be57a19e35c01",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.04.023 ⇑ Corresponding author. Tel.: +55 8197885665. E-mail addresses: [email protected] (Rafael Ferreira), [email protected] (L. de Souza Cabral), [email protected] (R.D. Lins), [email protected] (G. Pereira e Silva), [email protected] (F. Freitas), [email protected] (G.D.C. Cavalcanti), rjlima01@gmail. com (R. Lima), [email protected] (S.J. Simske), [email protected] (L. Favaro). Rafael Ferreira a,⇑, Luciano de Souza Cabral , Rafael Dueire Lins , Gabriel Pereira e Silva , Fred Freitas , George D.C. Cavalcanti , Rinaldo Lima , Steven J. Simske , Luciano Favaro c",
"title": ""
}
] | scidocsrr |
841ead8607dd8724013c08b638834473 | Scalable and Lightweight CTF Infrastructures Using Application Containers | [
{
"docid": "bc6a13cc44a77d29360d04a2bc96bd61",
"text": "Security competitions have become a popular way to foster security education by creating a competitive environment in which participants go beyond the effort usually required in traditional security courses. Live security competitions (also called “Capture The Flag,” or CTF competitions) are particularly well-suited to support handson experience, as they usually have both an attack and a defense component. Unfortunately, because these competitions put several (possibly many) teams against one another, they are difficult to design, implement, and run. This paper presents a framework that is based on the lessons learned in running, for more than 10 years, the largest educational CTF in the world, called iCTF. The framework’s goal is to provide educational institutions and other organizations with the ability to run customizable CTF competitions. The framework is open and leverages the security community for the creation of a corpus of educational security challenges.",
"title": ""
}
] | [
{
"docid": "db72513dd3d75f63d351a93fcb53cc46",
"text": "The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (vection), however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate as to whether vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be VIMS without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that addresses this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in future.",
"title": ""
},
{
"docid": "2e9786cfe8e7a759ed1e1481d59624ba",
"text": "Global path planning for mobile robot using genetic algorithm and A* algorithm is investigated in this paper. The proposed algorithm includes three steps: the MAKLINK graph theory is adopted to establish the free space model of mobile robots firstly, then Dijkstra algorithm is utilized for finding a feasible collision-free path, finally the global optimal path of mobile robots is obtained based on the hybrid algorithm of A* algorithm and genetic algorithm. Experimental results indicate that the proposed algorithm has better performance than Dijkstra algorithm in term of both solution quality and computational time, and thus it is a viable approach to mobile robot global path planning.",
"title": ""
},
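The middle step of the pipeline above, running Dijkstra on the free-space graph to get an initial collision-free path, is sketched below on a tiny hand-made graph standing in for a MAKLINK decomposition (node names and edge costs are invented). The genetic-algorithm refinement stage is not reproduced here.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a graph given as {node: [(neighbor, cost), ...]}."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [goal], goal
    while node != start:                      # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

# Nodes are midpoints of free-space link lines; an edge exists when the segment
# between two midpoints stays in free space (all costs here are illustrative).
free_space_graph = {
    "start": [("L1", 2.0), ("L2", 3.5)],
    "L1": [("L3", 2.0), ("goal", 5.0)],
    "L2": [("L3", 1.0)],
    "L3": [("goal", 2.5)],
    "goal": [],
}
print(dijkstra(free_space_graph, "start", "goal"))   # (['start', 'L1', 'L3', 'goal'], 6.5)
```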
{
"docid": "b7a04d56d6d06a0d89f6113c3ab639a8",
"text": "Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. Agent modeling is one of the most difficult problems in decision-making applications and in poker it is essential to achieving high performance. This paper describes and evaluates Loki, a poker program capable of observing its opponents, constructing opponent models and dynamically adapting its play to best exploit patterns in the opponents’ play.",
"title": ""
},
{
"docid": "370c728b64c8cf6c63815729f4f9b03e",
"text": "Previous researchers studying baseball pitching have compared kinematic and kinetic parameters among different types of pitches, focusing on the trunk, shoulder, and elbow. The lack of data on the wrist and forearm limits the understanding of clinicians, coaches, and researchers regarding the mechanics of baseball pitching and the differences among types of pitches. The purpose of this study was to expand existing knowledge of baseball pitching by quantifying and comparing kinematic data of the wrist and forearm for the fastball (FA), curveball (CU) and change-up (CH) pitches. Kinematic and temporal parameters were determined from 8 collegiate pitchers recorded with a four-camera system (200 Hz). Although significant differences were observed for all pitch comparisons, the least number of differences occurred between the FA and CH. During arm cocking, peak wrist extension for the FA and CH pitches was greater than for the CU, while forearm supination was greater for the CU. In contrast to the current study, previous comparisons of kinematic data for trunk, shoulder, and elbow revealed similarities between the FA and CU pitches and differences between the FA and CH pitches. Kinematic differences among pitches depend on the segment of the body studied.",
"title": ""
},
{
"docid": "d1e96872bb61cc16597827ec11f6bb4f",
"text": "Audit regulators and the auditing profession have responded to this expectation by issuing a number of standards outlining auditors’ responsibilities to detect fraud (e.g., PCAOB 2010; IAASB 2009, PCAOB 2002; AICPA 2002; AICPA 1997; AICPA 1988). These standards indicate that auditors are responsible for providing reasonable assurance that audited financial statements are free of material misstatements due to fraud. Nonetheless, prior research indicates that auditors detect relatively few significant frauds (Dyck et al. 2010, KPMG 2009). This finding raises the obvious question: Why do auditors rarely detect fraud?",
"title": ""
},
{
"docid": "56c41892216823b592bcafbe00508a67",
"text": "Nowadays, universities offer most of their services using corporate website. In higher education services including admission services, a university needs to always provide excellent service to ensure student candidate satisfaction. To obtain student candidate satisfaction apart from the quality of education must also be accompanied by providing consultation services and information to them. This paper proposes the development of Chatbot which acts as a conversation agent that can play a role of as student candidate service. This Chatbot is called Dinus Intelligent Assistance (DINA). DINA uses knowledge based as a center for machine learning approach. The pattern extracted from the knowledge based can be used to provide responses to the user. The source of knowledge based is taken from Universitas Dian Nuswantoro (UDINUS) guest book. It contains of questions and answers about UDINUS admission services. Testing of this system is done by entering questions. From 166 intents, the author tested it using ten random sample questions. Among them, it got eight tested questions answered correctly. Therefore, by using this study we can develop further intelligent Chatbots to help student candidates find the information they need without waiting for the admission staffs's answer.",
"title": ""
},
{
"docid": "e8f28a4e17650041350e535c1ac792ff",
"text": "A compact multiple-input-multiple-output (MIMO) antenna with a small size of 26×40 mm2 is proposed for portable ultrawideband (UWB) applications. The antenna consists of two planar-monopole (PM) antenna elements with microstrip-fed printed on one side of the substrate and placed perpendicularly to each other to achieve good isolation. To enhance isolation and increase impedance bandwidth, two long protruding ground stubs are added to the ground plane on the other side and a short ground strip is used to connect the ground planes of the two PMs together to form a common ground. Simulation and measurement are used to study the antenna performance in terms of reflection coefficients at the two input ports, coupling between the two input ports, radiation pattern, realized peak gain, efficiency and envelope correlation coefficient for pattern diversity. Results show that the MIMO antenna has an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than -15 dB, and a low envelope correlation coefficient of less than 0.2 across the frequency band, making it a good candidate for portable UWB applications.",
"title": ""
},
{
"docid": "dae567414224b24dbb7bc06b9b9ea57f",
"text": "With the increasing computational power of computers, software design systems are progressing from being tools enabling architects and designers to express their ideas, to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it. To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design, thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D solid objects.",
"title": ""
},
{
"docid": "4e8365fbc07d7d8bc55b18d52abec38a",
"text": "Depression's influence on mother-infant interactious at 2 months postpartum was studied in 24 depressed and 22 nondepressed mothex-infant dyads. Depression was diagnosed using the SADS-L and RDC. In S's homes, structured interactions of 3 min duration were videotaped and later coded using behavioral descriptors and a l-s time base. Unstructured interactions were described using rating scales. During structured interactions, depressed mothers were more negative and their babies were less positive than were nondepressed dyads. The reduced positivity of depressed dyads was achieved through contingent resixmfiveness. Ratings from unstructured interactions were consistent with these findings. Results support the hypothesis that depression negatively influences motherinfant behaviol; but indicate that influence may vary with development, chronicity, and presence of other risk factors.",
"title": ""
},
{
"docid": "06a3bf091404fc51bb3ee0a9f1d8a759",
"text": "A compact design of a circularly-polarized microstrip antenna in order to achieve dual-band behavior for Radio Frequency Identification (RFID) applications is presented, defected ground structure (DGS) technique is used to miniaturize and get a dual-band antenna, the entire size is 38×40×1.58 mm3. This antenna was designed to cover both ultra-height frequency (740MHz ~ 1GHz) and slow height frequency (2.35 GHz ~ 2.51GHz), return loss <; -10 dB, the 3-dB axial ratio bandwidths are about 110 MHz at the lower band (900 MHz).",
"title": ""
},
{
"docid": "d103d7793a9ff39c43dce47d45742905",
"text": "This paper proposes an architecture for an open-domain conversational system and evaluates an implemented system. The proposed architecture is fully composed of modules based on natural language processing techniques. Experimental results using human subjects show that our architecture achieves significantly better naturalness than a retrieval-based baseline and that its naturalness is close to that of a rule-based system using 149K hand-crafted rules.",
"title": ""
},
{
"docid": "cdad4ee7017fb232425aceff8b50dca4",
"text": "At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to make predictions about a model’s behavior.",
"title": ""
},
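The three properties named in the passage above can be made concrete with a little bookkeeping over trials in which a person is shown an input and either declines to predict or guesses the model's output: coverage is the fraction of attempted predictions and precision the fraction of those that are right (effort would additionally be measured, e.g. as time taken, and is left out here). The toy records are fabricated.

```python
def coverage_and_precision(trials):
    """trials: list of dicts with 'human_prediction' (None if the person
    declined to predict) and 'model_output'."""
    attempted = [t for t in trials if t["human_prediction"] is not None]
    coverage = len(attempted) / len(trials)
    correct = sum(t["human_prediction"] == t["model_output"] for t in attempted)
    precision = correct / len(attempted) if attempted else 0.0
    return coverage, precision

trials = [
    {"human_prediction": "spam", "model_output": "spam"},
    {"human_prediction": "ham",  "model_output": "spam"},
    {"human_prediction": None,   "model_output": "ham"},   # declined -> lowers coverage only
    {"human_prediction": "ham",  "model_output": "ham"},
]
cov, prec = coverage_and_precision(trials)
print(f"coverage={cov:.2f} precision={prec:.2f}")   # coverage=0.75 precision=0.67
```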
{
"docid": "6c92ed5a38cc4ba5b7fe644cd086ca48",
"text": "BACKGROUND\nOsteoarthritis (OA), a chronic degenerative disease of synovial joints is characterised by pain and stiffness. Aim of treatment is pain relief. Complementary and alternative medicine (CAM) refers to practices which are not an integral part of orthodox medicine.\n\n\nAIMS AND OBJECTIVES\nTo determine the pattern of usage of CAM among OA patients in Nigeria.\n\n\nPATIENTS AND METHODS\nConsecutive patients with OA attending orthopaedic clinic of Havana Specialist Hospital, Lagos, Nigeria were interviewed over a 6- month period st st of 1 May to 31 October 2007 on usage of CAM. Structured and open-ended questions were used. Demographic data, duration of OA and treatment as well as compliance to orthodox medications were documented.\n\n\nRESULTS\nOne hundred and sixty four patients were studied.120 (73.25%) were females and 44(26.89%) were males. Respondents age range between 35-74 years. 66(40.2%) patients used CAM. 35(53.0%) had done so before presenting to the hospital. The most commonly used CAM were herbal products used by 50(75.8%) of CAM users. Among herbal product users, 74.0% used non- specific local products, 30.0% used ginger, 36.0% used garlic and 28.0% used Aloe Vera. Among CAM users, 35(53.0%) used local embrocation and massage, 10(15.2%) used spiritual methods. There was no significant difference in demographics, clinical characteristics and pain control among CAM users and non-users.\n\n\nCONCLUSION\nMany OA patients receiving orthodox therapy also use CAM. Medical doctors need to keep a wary eye on CAM usage among patients and enquire about this health-seeking behaviour in order to educate them on possible drug interactions, adverse effects and long term complications.",
"title": ""
},
{
"docid": "c4ecb79dc2185fe0f7f422a092bc1334",
"text": "The set of minutia points is considered to be the most distinctive feature for fingerprint representation and is widely used in fingerprint matching. It was believed that the minutiae set does not contain sufficient information to reconstruct the original fingerprint image from which minutiae were extracted. However, recent studies have shown that it is indeed possible to reconstruct fingerprint images from their minutiae representations. Reconstruction techniques demonstrate the need for securing fingerprint templates, improving the template interoperability, and improving fingerprint synthesis. But, there is still a large gap between the matching performance obtained from original fingerprint images and their corresponding reconstructed fingerprint images. In this paper, the prior knowledge about fingerprint ridge structures is encoded in terms of orientation patch and continuous phase patch dictionaries to improve the fingerprint reconstruction. The orientation patch dictionary is used to reconstruct the orientation field from minutiae, while the continuous phase patch dictionary is used to reconstruct the ridge pattern. Experimental results on three public domain databases (FVC2002 DB1_A, FVC2002 DB2_A, and NIST SD4) demonstrate that the proposed reconstruction algorithm outperforms the state-of-the-art reconstruction algorithms in terms of both: 1) spurious minutiae and 2) matching performance with respect to type-I attack (matching the reconstructed fingerprint against the same impression from which minutiae set was extracted) and type-II attack (matching the reconstructed fingerprint against a different impression of the same finger).",
"title": ""
},
{
"docid": "f2f95f70783be5d5ee1260a3c5b9d892",
"text": "Information Extraction is the process of automatically obtaining knowledge from plain text. Because of the ambiguity of written natural language, Information Extraction is a difficult task. Ontology-based Information Extraction (OBIE) reduces this complexity by including contextual information in the form of a domain ontology. The ontology provides guidance to the extraction process by providing concepts and relationships about the domain. However, OBIE systems have not been widely adopted because of the difficulties in deployment and maintenance. The Ontology-based Components for Information Extraction (OBCIE) architecture has been proposed as a form to encourage the adoption of OBIE by promoting reusability through modularity. In this paper, we propose two orthogonal extensions to OBCIE that allow the construction of hybrid OBIE systems with higher extraction accuracy and a new functionality. The first extension utilizes OBCIE modularity to integrate different types of implementation into one extraction system, producing a more accurate extraction. For each concept or relationship in the ontology, we can select the best implementation for extraction, or we can combine both implementations under an ensemble learning schema. The second extension is a novel ontology-based error detection mechanism. Following a heuristic approach, we can identify sentences that are logically inconsistent with the domain ontology. Because the implementation strategy for the extraction of a concept is independent of the functionality of the extraction, we can design a hybrid OBIE system with concepts utilizing different implementation strategies for extracting correct or incorrect sentences. Our evaluation shows that, in the implementation extension, our proposed method is more accurate in terms of correctness and completeness of the extraction. Moreover, our error detection method can identify incorrect statements with a high accuracy.",
"title": ""
},
{
"docid": "c699ce2a06276f722bf91806378b11eb",
"text": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.",
"title": ""
},
{
"docid": "c51acd24cb864b050432a055fef2de9a",
"text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.",
"title": ""
},
{
"docid": "c30f721224317a41c1e316c158549d81",
"text": "The oxysterol receptor LXR is a key transcriptional regulator of lipid metabolism. LXR increases expression of SREBP-1, which in turn regulates at least 32 genes involved in lipid synthesis and transport. We recently identified 25-hydroxycholesterol-3-sulfate (25HC3S) as an important regulatory molecule in the liver. We have now studied the effects of 25HC3S and its precursor, 25-hydroxycholesterol (25HC), on lipid metabolism as mediated by the LXR/SREBP-1 signaling in macrophages. Addition of 25HC3S to human THP-1-derived macrophages markedly decreased nuclear LXR protein levels. 25HC3S administration was followed by dose- and time-dependent decreases in SREBP-1 mature protein and mRNA levels. 25HC3S decreased the expression of SREBP-1-responsive genes, acetyl-CoA carboxylase-1, and fatty acid synthase (FAS) as well as HMGR and LDLR, which are key proteins involved in lipid metabolism. Subsequently, 25HC3S decreased intracellular lipids and increased cell proliferation. In contrast to 25HC3S, 25HC acted as an LXR ligand, increasing ABCA1, ABCG1, SREBP-1, and FAS mRNA levels. In the presence of 25HC3S, 25HC, and LXR agonist T0901317, stimulation of LXR targeting gene expression was repressed. We conclude that 25HC3S acts in macrophages as a cholesterol satiety signal, downregulating cholesterol and fatty acid synthetic pathways via inhibition of LXR/SREBP signaling. A possible role of oxysterol sulfation is proposed.",
"title": ""
},
{
"docid": "841a4a9f1a43b06064ccb769f29c2fe4",
"text": "A simple way to mitigate the potential negative side-effects associated with chemical lysis of a blood clot is to tear its fibrin network via mechanical rubbing using a helical robot. Here, we achieve mechanical rubbing of blood clots under ultrasound guidance and using external magnetic actuation. Position of the helical robot is determined using ultrasound feedback and used to control its motion toward the clot, whereas the volume of the clots is estimated simultaneously using visual feedback. We characterize the shear modulus and ultimate shear strength of the blood clots to predict their removal rate during rubbing. Our <italic>in vitro</italic> experiments show the ability to move the helical robot controllably toward clots using ultrasound feedback with average and maximum errors of <inline-formula> <tex-math notation=\"LaTeX\">${\\text{0.84}\\pm \\text{0.41}}$</tex-math></inline-formula> and 2.15 mm, respectively, and achieve removal rate of <inline-formula><tex-math notation=\"LaTeX\">$-\\text{0.614} \\pm \\text{0.303}$</tex-math> </inline-formula> mm<inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math></inline-formula>/min at room temperature (<inline-formula><tex-math notation=\"LaTeX\">${\\text{25}}^{\\circ }$</tex-math></inline-formula>C) and <inline-formula><tex-math notation=\"LaTeX\">$-\\text{0.482} \\pm \\text{0.23}$</tex-math></inline-formula> mm <inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math></inline-formula>/min at body temperature (37 <inline-formula><tex-math notation=\"LaTeX\">$^{\\circ}$</tex-math></inline-formula>C), under the influence of two rotating dipole fields at frequency of 35 Hz. We also validate the effectiveness of mechanical rubbing by measuring the number of red blood cells and platelets past the clot. Our measurements show that rubbing achieves cell count of <inline-formula><tex-math notation=\"LaTeX\">$(\\text{46} \\pm \\text{10.9}) \\times \\text{10}^{4}$</tex-math> </inline-formula> cell/ml, whereas the count in the absence of rubbing is <inline-formula><tex-math notation=\"LaTeX\"> $(\\text{2} \\pm \\text{1.41}) \\times \\text{10}^{4}$</tex-math></inline-formula> cell/ml, after 40 min.",
"title": ""
},
{
"docid": "790a310f599ff9475cc5a66c0e1ca291",
"text": "In the past 20 years, there has been a great advancement in knowledge pertaining to compliance with amblyopia treatments. The occlusion dose monitor introduced quantitative monitoring methods in patching, which sparked our initial understanding of the dose-response relationship for patching amblyopia treatment. This review focuses on current compliance knowledge and the impact it has on patching and atropine amblyopia treatment.",
"title": ""
}
] | scidocsrr |
a6bed6910aac2ca61a0877886423bd01 | Structured Sequence Modeling with Graph Convolutional Recurrent Networks | [
{
"docid": "8d83568ca0c89b1a6e344341bb92c2d0",
"text": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
"title": ""
}
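A minimal numerical sketch of a mapping in the spirit of τ(G, n): each node's state is repeatedly recomputed from its neighbours' states and its own features until the iteration settles, and the final state of the queried node is its m-dimensional output. The fixed random weight matrices below stand in for the learned transition function of the actual GNN model, so this shows only the information-diffusion mechanics, not the supervised learning algorithm.

```python
import numpy as np

def node_states(adj, features, dim=4, iters=50, seed=0):
    """Iterate x_v = tanh(W @ mean(x_u, u in N(v)) + U @ feat_v) toward a fixed point."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    W = rng.normal(scale=0.3, size=(dim, dim))               # untrained stand-in weights
    U = rng.normal(scale=0.3, size=(dim, features.shape[1]))
    x = np.zeros((n, dim))
    for _ in range(iters):
        new_x = np.zeros_like(x)
        for v in range(n):
            agg = x[adj[v]].mean(axis=0) if adj[v] else np.zeros(dim)
            new_x[v] = np.tanh(W @ agg + U @ features[v])
        x = new_x
    return x

# Tiny undirected graph 0-1, 1-2, 2-0, 2-3 as adjacency lists, with 2-d node features.
adj = [[1, 2], [0, 2], [0, 1, 3], [2]]
features = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
states = node_states(adj, features)
print("state of node 3:", np.round(states[3], 3))   # the m-dimensional output for that node
```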
] | [
{
"docid": "73270e8140d763510d97f7bd2fdd969e",
"text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.",
"title": ""
},
{
"docid": "32d235c450be47d9f5bca03cb3d40f82",
"text": "Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.",
"title": ""
},
{
"docid": "890b1ed209b3e34c5b460dce310ee08f",
"text": "INTRODUCTION\nThe adequate use of compression in venous leg ulcer treatment is equally important to patients as well as clinicians. Currently, there is a lack of clarity on contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients.\n\n\nMETHODS\nThe project aimed to optimize prevention, treatment and maintenance approaches by recognizing contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients. A literature review was conducted of current guidelines on venous leg ulcer prevention, management and maintenance.\n\n\nRESULTS\nSearches took place from 29th February 2016 to 30th April 2016 and were prospectively limited to publications in the English and German languages and publication dates were between January 2009 and April 2016. Twenty Guidelines, clinical pathways and consensus papers on compression therapy for venous leg ulcer treatment and for venous disease, were included. Guidelines agreed on the following absolute contraindications: Arterial occlusive disease, heart failure and ankle brachial pressure index (ABPI) <0.5, but gave conflicting recommendations on relative contraindications, risks and adverse events. Moreover definitions were unclear and not consistent.\n\n\nCONCLUSIONS\nEvidence-based guidance is needed to inform clinicians on risk factor, adverse effects, complications and contraindications. ABPI values need to be specified and details should be given on the type of compression that is safe to use. Ongoing research challenges the present recommendations, shifting some contraindications into a list of potential indications. Complications of compression can be prevented when adequate assessment is performed and clinicians are skilled in applying compression.",
"title": ""
},
{
"docid": "c3a7d3fa13bed857795c4cce2e992b87",
"text": "Healthcare consumers, researchers, patients and policy makers increasingly use systematic reviews (SRs) to aid their decision-making process. However, the conduct of SRs can be a time-consuming and resource-intensive task. Often, clinical practice guideline developers or other decision-makers need to make informed decisions in a timely fashion (e.g. outbreaks of infection, hospital-based health technology assessments). Possible approaches to address the issue of timeliness in the production of SRs are to (a) implement process parallelisation, (b) adapt and apply innovative technologies, and/or (c) modify SR processes (e.g. study eligibility criteria, search sources, data extraction or quality assessment). Highly parallelised systematic reviewing requires substantial resources to support a team of experienced information specialists, reviewers and methodologists working alongside with clinical content experts to minimise the time for completing individual review steps while maximising the parallel progression of multiple steps. Effective coordination and management within the team and across external stakeholders are essential elements of this process. Emerging innovative technologies have a great potential for reducing workload and improving efficiency of SR production. The most promising areas of application would be to allow automation of specific SR tasks, in particular if these tasks are time consuming and resource intensive (e.g. language translation, study selection, data extraction). Modification of SR processes involves restricting, truncating and/or bypassing one or more SR steps, which may risk introducing bias to the review findings. Although the growing experiences in producing various types of rapid reviews (RR) and the accumulation of empirical studies exploring potential bias associated with specific SR tasks have contributed to the methodological development for expediting SR production, there is still a dearth of research examining the actual impact of methodological modifications and comparing the findings between RRs and SRs. This evidence would help to inform as to which SR tasks can be accelerated or truncated and to what degree, while maintaining the validity of review findings. Timely delivered SRs can be of value in informing healthcare decisions and recommendations, especially when there is practical urgency and there is no other relevant synthesised evidence.",
"title": ""
},
{
"docid": "7c5a80b0fef3e0e1fe5ce314b6e5aaf4",
"text": "OBJECTIVES\nGiven the large-scale adoption and deployment of mobile phones by health services and frontline health workers (FHW), we aimed to review and synthesise the evidence on the feasibility and effectiveness of mobile-based services for healthcare delivery.\n\n\nMETHODS\nFive databases - MEDLINE, EMBASE, Global Health, Google Scholar and Scopus - were systematically searched for relevant peer-reviewed articles published between 2000 and 2013. Data were extracted and synthesised across three themes as follows: feasibility of use of mobile tools by FHWs, training required for adoption of mobile tools and effectiveness of such interventions.\n\n\nRESULTS\nForty-two studies were included in this review. With adequate training, FHWs were able to use mobile phones to enhance various aspects of their work activities. Training of FHWs to use mobile phones for healthcare delivery ranged from a few hours to about 1 week. Five key thematic areas for the use of mobile phones by FHWs were identified as follows: data collection and reporting, training and decision support, emergency referrals, work planning through alerts and reminders, and improved supervision of and communication between healthcare workers. Findings suggest that mobile based data collection improves promptness of data collection, reduces error rates and improves data completeness. Two methodologically robust studies suggest that regular access to health information via SMS or mobile-based decision-support systems may improve the adherence of the FHWs to treatment algorithms. The evidence on the effectiveness of the other approaches was largely descriptive and inconclusive.\n\n\nCONCLUSIONS\nUse of mHealth strategies by FHWs might offer some promising approaches to improving healthcare delivery; however, the evidence on the effectiveness of such strategies on healthcare outcomes is insufficient.",
"title": ""
},
{
"docid": "d7bc62e7fca922f9b97e42deff85d010",
"text": "In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing highquality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedbackbased concept selection in the ILP setup in order to maximize the user-desired content in the summary.",
"title": ""
},
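Concept-based extractive selection of the kind used in the passage above is typically solved exactly with an ILP solver; the sketch below swaps the solver for a greedy weighted-concept-coverage loop so it stays dependency-free, and shows where user feedback would simply rescale concept weights before re-running the selection. Sentences, concepts, weights and the feedback itself are all invented.

```python
def greedy_summary(sentences, concept_weights, budget_words):
    """Pick sentences that maximise the total weight of newly covered concepts
    while staying under a word budget (a greedy stand-in for the ILP)."""
    covered, summary, length = set(), [], 0
    while True:
        best, best_gain = None, 0.0
        for sent, concepts, n_words in sentences:
            if sent in summary or length + n_words > budget_words:
                continue
            gain = sum(concept_weights[c] for c in concepts if c not in covered)
            if gain > best_gain:
                best, best_gain = (sent, concepts, n_words), gain
        if best is None:
            return summary
        sent, concepts, n_words = best
        summary.append(sent)
        covered |= set(concepts)
        length += n_words

sentences = [
    ("Quake hits coastal city.",       {"quake", "city"},  4),
    ("Hundreds evacuated overnight.",  {"evacuation"},     3),
    ("City mayor praises responders.", {"city", "mayor"},  4),
]
weights = {"quake": 3.0, "city": 2.0, "evacuation": 2.5, "mayor": 0.5}
print(greedy_summary(sentences, weights, budget_words=8))

# Feedback round (sketch): the user marks "quake" as already known and asks for
# more about the evacuation, so the weights are rescaled and selection re-runs.
weights["quake"] = 0.0
weights["evacuation"] *= 2
print(greedy_summary(sentences, weights, budget_words=8))
```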
{
"docid": "c7f23ddb60394659cdf48ea4df68ae6b",
"text": "OBJECTIVES\nWe hypothesized reduction of 30 days' in-hospital morbidity, mortality, and length of stay postimplementation of the World Health Organization's Surgical Safety Checklist (SSC).\n\n\nBACKGROUND\nReductions of morbidity and mortality have been reported after SSC implementation in pre-/postdesigned studies without controls. Here, we report a randomized controlled trial of the SSC.\n\n\nMETHODS\nA stepped wedge cluster randomized controlled trial was conducted in 2 hospitals. We examined effects on in-hospital complications registered by International Classification of Diseases, Tenth Revision codes, length of stay, and mortality. The SSC intervention was sequentially rolled out in a random order until all 5 clusters-cardiothoracic, neurosurgery, orthopedic, general, and urologic surgery had received the Checklist. Data were prospectively recorded in control and intervention stages during a 10-month period in 2009-2010.\n\n\nRESULTS\nA total of 2212 control procedures were compared with 2263 SCC procedures. The complication rates decreased from 19.9% to 11.5% (P < 0.001), with absolute risk reduction 8.4 (95% confidence interval, 6.3-10.5) from the control to the SSC stages. Adjusted for possible confounding factors, the SSC effect on complications remained significant with odds ratio 1.95 (95% confidence interval, 1.59-2.40). Mean length of stay decreased by 0.8 days with SCC utilization (95% confidence interval, 0.11-1.43). In-hospital mortality decreased significantly from 1.9% to 0.2% in 1 of the 2 hospitals post-SSC implementation, but the overall reduction (1.6%-1.0%) across hospitals was not significant.\n\n\nCONCLUSIONS\nImplementation of the WHO SSC was associated with robust reduction in morbidity and length of in-hospital stay and some reduction in mortality.",
"title": ""
},
{
"docid": "0a929fa28caa0138c1283d7f54ecccc9",
"text": "While predictions abound that electronic books will supplant traditional paper-based books, many people bemoan the coming loss of the book as cultural artifact. In this project we deliberately keep the affordances of paper books while adding electronic augmentation. The Listen Reader combines the look and feel of a real book - a beautiful binding, paper pages and printed images and text - with the rich, evocative quality of a movie soundtrack. The book's multi-layered interactive soundtrack consists of music and sound effects. Electric field sensors located in the book binding sense the proximity of the reader's hands and control audio parameters, while RFID tags embedded in each page allow fast, robust page identification.\nThree different Listen Readers were built as part of a six-month museum exhibit, with more than 350,000 visitors. This paper discusses design, implementation, and lessons learned through the iterative design process, observation, and visitor interviews.",
"title": ""
},
{
"docid": "bc1efec6824aae80c9cae7ea2b2c4842",
"text": "State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.",
"title": ""
},
{
"docid": "45986bb7bb041f50fac577e562347b61",
"text": "In this paper, we study the human locomotor adaptation to the action of a powered exoskeleton providing assistive torque at the user's hip during walking. To this end, we propose a controller that provides the user's hip with a fraction of the nominal torque profile, adapted to the specific gait features of the user from Winter's reference data . The assistive controller has been implemented on the ALEX II exoskeleton and tested on ten healthy subjects. Experimental results show that when assisted by the exoskeleton, users can reduce the muscle effort compared to free walking. Despite providing assistance only to the hip joint, both hip and ankle muscles significantly reduced their activation, indicating a clear tradeoff between hip and ankle strategy to propel walking.",
"title": ""
},
{
"docid": "7916a261319dad5f257a0b8e0fa97fec",
"text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.",
"title": ""
},
{
"docid": "611f7b5564c9168f73f778e7466d1709",
"text": "A fold-back current-limit circuit, with load-insensitive quiescent current characteristic for CMOS low dropout regulator (LDO), is proposed in this paper. This method has been designed in 0.35 µm CMOS technology and verified by Hspice simulation. The quiescent current of the LDO is 5.7 µA at 100-mA load condition. It is only 2.2% more than it in no-load condition, 5.58 µA. The maximum current limit is set to be 197 mA, and the short-current limit is 77 mA. Thus, the power consumption can be saved up to 61% at the short-circuit condition, which also decreases the risk of damaging the power transistor. Moreover, the thermal protection can be simplified and the LDO will be more reliable.",
"title": ""
},
{
"docid": "9dc4da444e4df3f63f37b1928e36464c",
"text": "This paper presents and studies various selected literature primarily from conference proceedings, journals and clinical tests of the robotic, mechatronics, neurology and biomedical engineering of rehabilitation robotic systems. The present paper focuses of three main categories: types of rehabilitation robots, key technologies with current issues and future challenges. Literature on fundamental research with some examples from commercialized robots and new robot development projects related to rehabilitation are introduced. Most of the commercialized robots presented in this paper are well known especially to robotics engineers and scholars in the robotic field, but are less known to humanities scholars. The field of rehabilitation robot research is expanding; in light of this, some of the current issues and future challenges in rehabilitation robot engineering are recalled, examined and clarified with future directions. This paper is concluded with some recommendations with respect to rehabilitation robots.",
"title": ""
},
{
"docid": "43cf9c485c541afa84e3ee5ce4d39376",
"text": "With the tremendous popularity of PDF format, recognizing mathematical formulas in PDF documents becomes a new and important problem in document analysis field. In this paper, we present a method of embedded mathematical formula identification in PDF documents, based on Support Vector Machine (SVM). The method first segments text lines into words, and then classifies each word into two classes, namely formula or ordinary text. Various features of embedded formulas, including geometric layout, character and context content, are utilized to build a robust and adaptable SVM classifier. Embedded formulas are then extracted through merging the words labeled as formulas. Experimental results show good performance of the proposed method. Furthermore, the method has been successfully incorporated into a commercial software package for large-scale e-Book production.",
"title": ""
},
{
"docid": "cfa58ab168beb2d52fe6c2c47488e93a",
"text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.",
"title": ""
},
{
"docid": "2215fd5b4f1e884a66b62675c8c92d33",
"text": "In the context of structural optimization we propose a new numerical method based on a combination of the classical shape derivative and of the level-set method for front propagation. We implement this method in two and three space dimensions for a model of linear or nonlinear elasticity. We consider various objective functions with weight and perimeter constraints. The shape derivative is computed by an adjoint method. The cost of our numerical algorithm is moderate since the shape is captured on a fixed Eulerian mesh. Although this method is not specifically designed for topology optimization, it can easily handle topology changes. However, the resulting optimal shape is strongly dependent on the initial guess. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a7dddda96d65147c6d3e47df2757e329",
"text": "Today, a large number of audio features exists in audio retrieval for different purposes, such as automatic speech recognition, music information retrieval, audio segmentation, and environmental sound retrieval. The goal of this paper is to review latest research in the context of audio feature extraction and to give an application-independent overview of the most important existing techniques. We survey state-of-the-art features from various domains and propose a novel taxonomy for the organization of audio features. Additionally, we identify the building blocks of audio features and propose a scheme that allows for the description of arbitrary features. We present an extensive literature survey and provide more than 200 references to relevant high quality publications.",
"title": ""
},
{
"docid": "c5cb4f6b5bc524bad610e855105c1b99",
"text": "The authors examined how an applicant's handshake influences hiring recommendations formed during the employment interview. A sample of 98 undergraduate students provided personality measures and participated in mock interviews during which the students received ratings of employment suitability. Five trained raters independently evaluated the quality of the handshake for each participant. Quality of handshake was related to interviewer hiring recommendations. Path analysis supported the handshake as mediating the effect of applicant extraversion on interviewer hiring recommendations, even after controlling for differences in candidate physical appearance and dress. Although women received lower ratings for the handshake, they did not on average receive lower assessments of employment suitability. Exploratory analysis suggested that the relationship between a firm handshake and interview ratings may be stronger for women than for men.",
"title": ""
},
{
"docid": "d15e27ef0225d1f178b034534b57856b",
"text": "We introduce a novel joint sparse representation based multi-view automatic target recognition (ATR) method, which can not only handle multi-view ATR without knowing the pose but also has the advantage of exploiting the correlations among the multiple views of the same physical target for a single joint recognition decision. Extensive experiments have been carried out on moving and stationary target acquisition and recognition (MSTAR) public database to evaluate the proposed method compared with several state-of-the-art methods such as linear support vector machine (SVM), kernel SVM, as well as a sparse representation based classifier (SRC). Experimental results demonstrate that the proposed joint sparse representation ATR method is very effective and performs robustly under variations such as multiple joint views, depression, azimuth angles, target articulations, as well as configurations.",
"title": ""
},
{
"docid": "2b7a8590fe5e73d254a5be2ba3c1ee5b",
"text": "High resolution magnetic resonance (MR) imaging is desirable in many clinical applications due to its contribution to more accurate subsequent analyses and early clinical diagnoses. Single image super resolution (SISR) is an effective and cost efficient alternative technique to improve the spatial resolution of MR images. In the past few years, SISR methods based on deep learning techniques, especially convolutional neural networks (CNNs), have achieved state-of-the-art performance on natural images. However, the information is gradually weakened and training becomes increasingly difficult as the network deepens. The problem is more serious for medical images because lacking high quality and effective training samples makes deep models prone to underfitting or overfitting. Nevertheless, many current models treat the hierarchical features on different channels equivalently, which is not helpful for the models to deal with the hierarchical features discriminatively and targetedly. To this end, we present a novel channel splitting network (CSN) to ease the representational burden of deep models. The proposed CSN model divides the hierarchical features into two branches, i.e., residual branch and dense branch, with different information transmissions. The residual branch is able to promote feature reuse, while the dense branch is beneficial to the exploration of new features. Besides, we also adopt the merge-and-run mapping to facilitate information integration between different branches. Extensive experiments on various MR images, including proton density (PD), T1 and T2 images, show that the proposed CSN model achieves superior performance over other state-of-the-art SISR methods.",
"title": ""
}
] | scidocsrr |
37a76d3b6c71ef173133d68ba0809244 | Printflatables: Printing Human-Scale, Functional and Dynamic Inflatable Objects | [
{
"docid": "bf83b9fef9b4558538b2207ba57b4779",
"text": "This paper presents preliminary results for the design, development and evaluation of a hand rehabilitation glove fabricated using soft robotic technology. Soft actuators comprised of elastomeric materials with integrated channels that function as pneumatic networks (PneuNets), are designed and geometrically analyzed to produce bending motions that can safely conform with the human finger motion. Bending curvature and force response of these actuators are investigated using geometrical analysis and a finite element model (FEM) prior to fabrication. The fabrication procedure of the chosen actuator is described followed by a series of experiments that mechanically characterize the actuators. The experimental data is compared to results obtained from FEM simulations showing good agreement. Finally, an open-palm glove design and the integration of the actuators to it are described, followed by a qualitative evaluation study.",
"title": ""
}
] | [
{
"docid": "f136e875f021ea3ea67a87c6d0b1e869",
"text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.",
"title": ""
},
{
"docid": "2ce4d585edd54cede6172f74cf9ab8bb",
"text": "Enterprise resource planning (ERP) systems have been widely implemented by numerous firms throughout the industrial world. While success stories of ERP implementation abound due to its potential in resolving the problem of fragmented information, a substantial number of these implementations fail to meet the goals of the organization. Some are abandoned altogether and others contribute to the failure of an organization. This article seeks to identify the critical factors of ERP implementation and uses statistical analysis to further delineate the patterns of adoption of the various concepts. A cross-sectional mail survey was mailed to business executives who have experience in the implementation of ERP systems. The results of this study provide empirical evidence that the theoretical constructs of ERP implementation are followed at varying levels. It offers some fresh insights into the current practice of ERP implementation. In addition, this study fills the need for ERP implementation constructs that can be utilized for further study of this important topic.",
"title": ""
},
{
"docid": "64c1c37422037fc9156db42cdcdbe7fe",
"text": "[Context] It is an enigma that agile projects can succeed ‘without requirements’ when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements test cases are commonly viewed as requirements and detailed requirements are documented as test cases. [Objective] We have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. [Method] We performed an iterative case study at three companies and collected data through 14 interviews and 2 focus groups. [Results] The use of test cases as requirements poses both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. We have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine executable specification and the use of tools which provide specific support for the practice of using test cases as requirements. [Conclusions] The findings provide empirical insight into how agile development projects manage and communicate requirements. The identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. Practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as number of stakeholders and rate of change.",
"title": ""
},
{
"docid": "b169e0e76f26db1f08cd84524aa10a53",
"text": "A very lightweight, broad-band, dual polarized antenna array with 128 elements for the frequency range from 7 GHz to 18 GHz has been designed, manufactured and measured. The total gain at the center frequency was measured to be 20 dBi excluding feeding network losses.",
"title": ""
},
{
"docid": "9520b99708d905d3713867fac14c3814",
"text": "When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a `collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.",
"title": ""
},
{
"docid": "910a416dc736ec3566583c57123ac87c",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman [email protected] 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "dac5cebcbc14b82f7b8df977bed0c9d8",
"text": "While blockchain services hold great promise to improve many different industries, there are significant cybersecurity concerns which must be addressed. In this paper, we investigate security considerations for an Ethereum blockchain hosting a distributed energy management application. We have simulated a microgrid with ten buildings in the northeast U.S., and results of the transaction distribution and electricity utilization are presented. We also present the effects on energy distribution when one or two smart meters have their identities corrupted. We then propose a new approach to digital identity management that would require smart meters to authenticate with the blockchain ledger and mitigate identity-spoofing attacks. Applications of this approach to defense against port scans and DDoS, attacks are also discussed.",
"title": ""
},
{
"docid": "e5bf05ae6700078dda83eca8d2f65cd4",
"text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.",
"title": ""
},
{
"docid": "c1fecb605dcabbd411e3782c15fd6546",
"text": "Neuropathic pain is a debilitating form of chronic pain that affects 6.9-10% of the population. Health-related quality-of-life is impeded by neuropathic pain, which not only includes physical impairment, but the mental wellbeing of the patient is also hindered. A reduction in both physical and mental wellbeing bares economic costs that need to be accounted for. A variety of medications are in use for the treatment of neuropathic pain, such as calcium channel α2δ agonists, serotonin/noradrenaline reuptake inhibitors and tricyclic antidepressants. However, recent studies have indicated a lack of efficacy regarding the aforementioned medication. There is increasing clinical and pre-clinical evidence that can point to the use of ketamine, an “old” anaesthetic, in the management of neuropathic pain. Conversely, to see ketamine being used in neuropathic pain, there needs to be more conclusive evidence exploring the long-term effects of sub-anesthetic ketamine.",
"title": ""
},
{
"docid": "5b463701f83f7e6651260c8f55738146",
"text": "Heart disease diagnosis is a complex task which requires much experience and knowledge. Traditional way of predicting Heart disease is doctor’s examination or number of medical tests such as ECG, Stress Test, and Heart MRI etc. Nowadays, Health care industry contains huge amount of heath care data, which contains hidden information. This hidden information is useful for making effective decisions. Computer based information along with advanced Data mining techniques are used for appropriate results. Neural network is widely used tool for predicting Heart disease diagnosis. In this research paper, a Heart Disease Prediction system (HDPS) is developed using Neural network. The HDPS system predicts the likelihood of patient getting a Heart disease. For prediction, the system uses sex, blood pressure, cholesterol like 13 medical parameters. Here two more parameters are added i.e. obesity and smoking for better accuracy. From the results, it has been seen that neural network predict heart disease with nearly 100% accuracy.",
"title": ""
},
{
"docid": "a2f1a10c0e89f6d63f493c267759fb8f",
"text": "BACKGROUND\nPatient portals tied to provider electronic health record (EHR) systems are increasingly popular.\n\n\nPURPOSE\nTo systematically review the literature reporting the effect of patient portals on clinical care.\n\n\nDATA SOURCES\nPubMed and Web of Science searches from 1 January 1990 to 24 January 2013.\n\n\nSTUDY SELECTION\nHypothesis-testing or quantitative studies of patient portals tethered to a provider EHR that addressed patient outcomes, satisfaction, adherence, efficiency, utilization, attitudes, and patient characteristics, as well as qualitative studies of barriers or facilitators, were included.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted data and addressed discrepancies through consensus discussion.\n\n\nDATA SYNTHESIS\nFrom 6508 titles, 14 randomized, controlled trials; 21 observational, hypothesis-testing studies; 5 quantitative, descriptive studies; and 6 qualitative studies were included. Evidence is mixed about the effect of portals on patient outcomes and satisfaction, although they may be more effective when used with case management. The effect of portals on utilization and efficiency is unclear, although patient race and ethnicity, education level or literacy, and degree of comorbid conditions may influence use.\n\n\nLIMITATION\nLimited data for most outcomes and an absence of reporting on organizational and provider context and implementation processes.\n\n\nCONCLUSION\nEvidence that patient portals improve health outcomes, cost, or utilization is insufficient. Patient attitudes are generally positive, but more widespread use may require efforts to overcome racial, ethnic, and literacy barriers. Portals represent a new technology with benefits that are still unclear. Better understanding requires studies that include details about context, implementation factors, and cost.",
"title": ""
},
{
"docid": "1eef21abdf14dc430b333cac71d4fe07",
"text": "The authors have developed an adaptive matched filtering algorithm based upon an artificial neural network (ANN) for QRS detection. They use an ANN adaptive whitening filter to model the lower frequencies of the electrocardiogram (ECG) which are inherently nonlinear and nonstationary. The residual signal which contains mostly higher frequency QRS complex energy is then passed through a linear matched filter to detect the location of the QRS complex. The authors developed an algorithm to adaptively update the matched filter template from the detected QRS complex in the ECG signal itself so that the template can be customized to an individual subject. This ANN whitening filter is very effective at removing the time-varying, nonlinear noise characteristic of ECG signals. The detection rate for a very noisy patient record in the MIT/BIH arrhythmia database is 99.5% with this approach, which compares favorably to the 97.5% obtained using a linear adaptive whitening filter and the 96.5% achieved with a bandpass filtering method.<<ETX>>",
"title": ""
},
{
"docid": "a0d4089e55a0a392a2784ae50b6fa779",
"text": "Organizations place a great deal of emphasis on hiring individuals who are a good fit for the organization and the job. Among the many ways that individuals are screened for a job, the employment interview is particularly prevalent and nearly universally used (Macan, 2009; Huffcutt and Culbertson, 2011). This Research Topic is devoted to a construct that plays a critical role in our understanding of job interviews: impression management (IM). In the interview context, IM describes behaviors an individual uses to influence the impression that others have of them (Bozeman and Kacmar, 1997). For instance, a job applicant can flatter an interviewer to be seen as likable (i.e., ingratiation), play up their qualifications and abilities to be seen as competent (i.e., self-promotion), or utilize excuses or justifications to make up for a negative event or error (i.e., defensive IM; Ellis et al., 2002). IM has emerged as a central theme in the interview literature over the last several decades (for reviews, see Posthuma et al., 2002; Levashina et al., 2014). Despite some pioneering early work (e.g., Schlenker, 1980; Leary and Kowalski, 1990; Stevens and Kristof, 1995), there has been a resurgence of interest in the area over the last decade. While the literature to date has set up a solid foundational knowledge about interview IM, there are a number of emerging trends and directions. In the following, we lay out some critical areas of inquiry in interview IM, and highlight how the innovative set of papers in this Research Topic is illustrative of these new directions.",
"title": ""
},
{
"docid": "5fbb54e63158066198cdf59e1a8e9194",
"text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. Simulations show that our approach achieves higher fairness in data rate than the state-of-art in almost all network configurations.",
"title": ""
},
{
"docid": "0a16eb6bfb41a708e7a660cbf4c445af",
"text": "Data from 1,010 lactating lactating, predominately component-fed Holstein cattle from 25 predominately tie-stall dairy farms in southwest Ontario were used to identify objective thresholds for defining hyperketonemia in lactating dairy cattle based on negative impacts on cow health, milk production, or both. Serum samples obtained during wk 1 and 2 postpartum and analyzed for beta-hydroxybutyrate (BHBA) concentrations that were used in analysis. Data were time-ordered so that the serum samples were obtained at least 1 d before the disease or milk recording events. Serum BHBA cutpoints were constructed at 200 micromol/L intervals between 600 and 2,000 micromol/L. Critical cutpoints for the health analysis were determined based on the threshold having the greatest sum of sensitivity and specificity for predicting the disease occurrence. For the production outcomes, models for first test day milk yield, milk fat, and milk protein percentage were constructed including covariates of parity, precalving body condition score, season of calving, test day linear score, and the random effect of herd. Each cutpoint was tested in these models to determine the threshold with the greatest impact and least risk of a type 1 error. Serum BHBA concentrations at or above 1,200 micromol/L in the first week following calving were associated with increased risks of subsequent displaced abomasum [odds ratio (OR) = 2.60] and metritis (OR = 3.35), whereas the critical threshold of BHBA in wk 2 postpartum on the risk of abomasal displacement was >or=1,800 micromol/L (OR = 6.22). The best threshold for predicting subsequent risk of clinical ketosis from serum obtained during wk 1 and wk 2 postpartum was 1,400 micromol/L of BHBA (OR = 4.25 and 5.98, respectively). There was no association between clinical mastitis and elevated serum BHBA in wk 1 or 2 postpartum, and there was no association between wk 2 BHBA and risk of metritis. Greater serum BHBA measured during the first and second week postcalving were associated with less milk yield, greater milk fat percentage, and less milk protein percentage on the first Dairy Herd Improvement test day of lactation. Impacts on first Dairy Herd Improvement test milk yield began at BHBA >or=1,200 micromol/L for wk 1 samples and >or=1,400 micromol/L for wk 2 samples. The greatest impact on yield occurred at 1,400 micromol/L (-1.88 kg/d) and 2,000 micromol/L (-3.3 kg/d) for sera from the first and second week postcalving, respectively. Hyperketonemia can be defined at 1,400 micromol/L of BHBA and in the first 2 wk postpartum increases disease risk and results in substantial loss of milk yield in early lactation.",
"title": ""
},
{
"docid": "4c563b09a10ce0b444edb645ce411d42",
"text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic",
"title": ""
},
{
"docid": "9a30008cc270ac7a0bb1a0f12dca6187",
"text": "Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A/B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.",
"title": ""
},
{
"docid": "4b8f59d1b416d4869ae38dbca0eaca41",
"text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.",
"title": ""
},
{
"docid": "ec7b348a0fe38afa02989a22aa9dcac2",
"text": "We propose a general framework for learning from labeled and unlabeled data on a directed graph in which the structure of the graph including the directionality of the edges is considered. The time complexity of the algorithm derived from this framework is nearly linear due to recently developed numerical techniques. In the absence of labeled instances, this framework can be utilized as a spectral clustering method for directed graphs, which generalizes the spectral clustering approach for undirected graphs. We have applied our framework to real-world web classification problems and obtained encouraging results.",
"title": ""
}
] | scidocsrr |
f044fe45667845e23a37450a4166419f | An effective voting method for circle detection | [
{
"docid": "40eaf943d6fa760b064a329254adc5db",
"text": "We introduce the Adaptive Hough Transform, AHT, as an efficient way of implementing the Hough Transform, HT, method for the detection of 2-D shapes. The AHT uses a small accumulator array and the idea of a flexible iterative \"coarse to fine\" accumulation and search strategy to identify significant peaks in the Hough parameter spaces. The method is substantially superior to the standard HT implementation in both storage and computational requirements. In this correspondence we illustrate the ideas of the AHT by tackling the problem of identifying linear and circular segments in images by searching for clusters of evidence in 2-D parameter spaces. We show that the method is robust to the addition of extraneous noise and can be used to analyze complex images containing more than one shape.",
"title": ""
}
] | [
{
"docid": "f7e779114a0eb67fd9e3dfbacf5110c9",
"text": "Online game is an increasingly popular source of entertainment for all ages, with relatively prevalent negative consequences. Addiction is a problem that has received much attention. This research aims to develop a measure of online game addiction for Indonesian children and adolescents. The Indonesian Online Game Addiction Questionnaire draws from earlier theories and research on the internet and game addiction. Its construction is further enriched by including findings from qualitative interviews and field observation to ensure appropriate expression of the items. The measure consists of 7 items with a 5-point Likert Scale. It is validated by testing 1,477 Indonesian junior and senior high school students from several schools in Manado, Medan, Pontianak, and Yogyakarta. The validation evidence is shown by item-total correlation and criterion validity. The Indonesian Online Game Addiction Questionnaire has good item-total correlation (ranging from 0.29 to 0.55) and acceptable reliability (α = 0.73). It is also moderately correlated with the participant's longest time record to play online games (r = 0.39; p<0.01), average days per week in playing online games (ρ = 0.43; p<0.01), average hours per days in playing online games (ρ = 0.41; p<0.01), and monthly expenditure for online games (ρ = 0.30; p<0.01). Furthermore, we created a clinical cut-off estimate by combining criteria and population norm. The clinical cut-off estimate showed that the score of 14 to 21 may indicate mild online game addiction, and the score of 22 and above may indicate online game addiction. Overall, the result shows that Indonesian Online Game Addiction Questionnaire has sufficient psychometric property for research use, as well as limited clinical application.",
"title": ""
},
{
"docid": "ea048488791219be809072862a061444",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "628c8b906e3db854ea92c021bb274a61",
"text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from largescale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-ofthe-art methods.",
"title": ""
},
{
"docid": "4f44b685adc7e63f18a40d0f3fc25585",
"text": "Computational Thinking (CT) has become popular in recent years and has been recognised as an essential skill for all, as members of the digital age. Many researchers have tried to define CT and have conducted studies about this topic. However, CT literature is at an early stage of maturity, and is far from either explaining what CT is, or how to teach and assess this skill. In the light of this state of affairs, the purpose of this study is to examine the purpose, target population, theoretical basis, definition, scope, type and employed research design of selected papers in the literature that have focused on computational thinking, and to provide a framework about the notion, scope and elements of CT. In order to reveal the literature and create the framework for computational thinking, an inductive qualitative content analysis was conducted on 125 papers about CT, selected according to pre-defined criteria from six different databases and digital libraries. According to the results, the main topics covered in the papers composed of activities (computerised or unplugged) that promote CT in the curriculum. The targeted population of the papers was mainly K-12. Gamed-based learning and constructivism were the main theories covered as the basis for CT papers. Most of the papers were written for academic conferences and mainly composed of personal views about CT. The study also identified the most commonly used words in the definitions and scope of CT, which in turn formed the framework of CT. The findings obtained in this study may not only be useful in the exploration of research topics in CT and the identification of CT in the literature, but also support those who need guidance for developing tasks or programs about computational thinking and informatics.",
"title": ""
},
{
"docid": "14fac379b3d4fdfc0024883eba8431b3",
"text": "PURPOSE\nTo summarize the literature addressing subthreshold or nondamaging retinal laser therapy (NRT) for central serous chorioretinopathy (CSCR) and to discuss results and trends that provoke further investigation.\n\n\nMETHODS\nAnalysis of current literature evaluating NRT with micropulse or continuous wave lasers for CSCR.\n\n\nRESULTS\nSixteen studies including 398 patients consisted of retrospective case series, prospective nonrandomized interventional case series, and prospective randomized clinical trials. All studies but one evaluated chronic CSCR, and laser parameters varied greatly between studies. Mean central macular thickness decreased, on average, by ∼80 μm by 3 months. Mean best-corrected visual acuity increased, on average, by about 9 letters by 3 months, and no study reported a decrease in acuity below presentation. No retinal complications were observed with the various forms of NRT used, but six patients in two studies with micropulse laser experienced pigmentary changes in the retinal pigment epithelium attributed to excessive laser settings.\n\n\nCONCLUSION\nBased on the current evidence, NRT demonstrates efficacy and safety in 12-month follow-up in patients with chronic and possibly acute CSCR. The NRT would benefit from better standardization of the laser settings and understanding of mechanisms of action, as well as further prospective randomized clinical trials.",
"title": ""
},
{
"docid": "dfa1269878b384b24c7ba6aea6a11373",
"text": "Transfer printing represents a set of techniques for deterministic assembly of micro-and nanomaterials into spatially organized, functional arrangements with two and three-dimensional layouts. Such processes provide versatile routes not only to test structures and vehicles for scientific studies but also to high-performance, heterogeneously integrated functional systems, including those in flexible electronics, three-dimensional and/or curvilinear optoelectronics, and bio-integrated sensing and therapeutic devices. This article summarizes recent advances in a variety of transfer printing techniques, ranging from the mechanics and materials aspects that govern their operation to engineering features of their use in systems with varying levels of complexity. A concluding section presents perspectives on opportunities for basic and applied research, and on emerging use of these methods in high throughput, industrial-scale manufacturing.",
"title": ""
},
{
"docid": "8fc8f7e62cf9e9f89957b33c6e45063c",
"text": "A controller for a quadratic buck converter is given using average current-mode control. The converter has two filters; thus, it will exhibit fourth-order characteristic dynamics. The proposed scheme employs an inner loop that uses the current of the first inductor. This current can also be used for overload protection; therefore, the full benefits of current-mode control are maintained. For the outer loop, a conventional controller which provides good regulation characteristics is used. The design-oriented analytic results allow the designer to easily pinpoint the control circuit parameters that optimize the converter's performance. Experimental results are given for a 28 W switching regulator where current-mode control and voltage-mode control are compared.",
"title": ""
},
{
"docid": "ac2f02b46a885cf662c41a16f976819e",
"text": "This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities. Finally, we discuss how these methods are related to the conceptual framework and to one another.",
"title": ""
},
{
"docid": "69d94a7beb7ed35cc9fdd9ea824a0096",
"text": "We introduce an interactive method to assess cataracts in the human eye by crafting an optical solution that measures the perceptual impact of forward scattering on the foveal region. Current solutions rely on highly-trained clinicians to check the back scattering in the crystallin lens and test their predictions on visual acuity tests. Close-range parallax barriers create collimated beams of light to scan through sub-apertures, scattering light as it strikes a cataract. User feedback generates maps for opacity, attenuation, contrast and sub-aperture point-spread functions. The goal is to allow a general audience to operate a portable high-contrast light-field display to gain a meaningful understanding of their own visual conditions. User evaluations and validation with modified camera optics are performed. Compiled data is used to reconstruct the individual's cataract-affected view, offering a novel approach for capturing information for screening, diagnostic, and clinical analysis.",
"title": ""
},
{
"docid": "d026ebfc24e3e48d0ddb373f71d63162",
"text": "The claustrum has been proposed as a possible neural candidate for the coordination of conscious experience due to its extensive ‘connectome’. Herein we propose that the claustrum contributes to consciousness by supporting the temporal integration of cortical oscillations in response to multisensory input. A close link between conscious awareness and interval timing is suggested by models of consciousness and conjunctive changes in meta-awareness and timing in multiple contexts and conditions. Using the striatal beatfrequency model of interval timing as a framework, we propose that the claustrum integrates varying frequencies of neural oscillations in different sensory cortices into a coherent pattern that binds different and overlapping temporal percepts into a unitary conscious representation. The proposed coordination of the striatum and claustrum allows for time-based dimensions of multisensory integration and decision-making to be incorporated into consciousness.",
"title": ""
},
{
"docid": "9b519ba8a3b32d7b5b8a117b2d4d06ca",
"text": "This article reviews the most current practice guidelines in the diagnosis and management of patients born with cleft lip and/or palate. Such patients frequently have multiple medical and social issues that benefit greatly from a team approach. Common challenges include feeding difficulty, nutritional deficiency, speech disorders, hearing problems, ear disease, dental anomalies, and both social and developmental delays, among others. Interdisciplinary evaluation and collaboration throughout a patient's development are essential.",
"title": ""
},
{
"docid": "67a8a8ef9111edd9c1fa88e7c59b6063",
"text": "The process of obtaining intravenous (IV) access, Venipuncture, is an everyday invasive procedure in medical settings and there are more than one billion venipuncture related procedures like blood draws, peripheral catheter insertions, intravenous therapies, etc. performed per year [3]. Excessive venipunctures are both time and resource consuming events causing anxiety, pain and distress in patients, or can lead to severe harmful injuries [8]. The major problem faced by the doctors today is difficulty in accessing veins for intra-venous drug delivery & other medical situations [3]. There is a need to develop vein detection devices which can clearly show veins. This project deals with the design development of non-invasive subcutaneous vein detection system and is implemented based on near infrared imaging and interfaced to a laptop to make it portable. A customized CCD camera is used for capturing the vein images and Computer Software modules (MATLAB & LabVIEW) is used for the processing [3].",
"title": ""
},
{
"docid": "3a4da0cf9f4fdcc1356d25ea1ca38ca4",
"text": "Almost all of the existing work on Named Entity Recognition (NER) consists of the following pipeline stages – part-of-speech tagging, segmentation, and named entity type classification. The requirement of hand-labeled training data on these stages makes it very expensive to extend to different domains and entity classes. Even with a large amount of hand-labeled data, existing techniques for NER on informal text, such as social media, perform poorly due to a lack of reliable capitalization, irregular sentence structure and a wide range of vocabulary. In this paper, we address the lack of hand-labeled training data by taking advantage of weak super vision signals. We present our approach in two parts. First, we propose a novel generative model that combines the ideas from Hidden Markov Model (HMM) and n-gram language models into what we call an N-gram Language Markov Model (NLMM). Second, we utilize large-scale weak supervision signals from sources such as Wikipedia titles and the corresponding click counts to estimate parameters in NLMM. Our model is simple and can be implemented without the use of Expectation Maximization or other expensive iterative training techniques. Even with this simple model, our approach to NER on informal text outperforms existing systems trained on formal English and matches state-of-the-art NER systems trained on hand-labeled Twitter messages. Because our model does not require hand-labeled data, we can adapt our system to other domains and named entity classes very easily. We demonstrate the flexibility of our approach by successfully applying it to the different domain of extracting food dishes from restaurant reviews with very little extra work.",
"title": ""
},
{
"docid": "704c62beaf6b9b09265c0daacde69abc",
"text": "This paper investigates discrimination capabilities in the texture of fundus images to differentiate between pathological and healthy images. For this purpose, the performance of local binary patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors such as LBP filtering and local phase quantization. The goal is to distinguish between diabetic retinopathy (DR), age-related macular degeneration (AMD), and normal fundus images analyzing the texture of the retina background and avoiding a previous lesion segmentation stage. Five experiments (separating DR from normal, AMD from normal, pathological from normal, DR from AMD, and the three different classes) were designed and validated with the proposed procedure obtaining promising results. For each experiment, several classifiers were tested. An average sensitivity and specificity higher than 0.86 in all the cases and almost of 1 and 0.99, respectively, for AMD detection were achieved. These results suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in a diagnosis aid system for retinal disease screening.",
"title": ""
},
{
"docid": "c39ab37765fbafdbc2dd3bf70c801d27",
"text": "This paper presents the advantages in extending Classical T ensor Algebra (CTA), also known as Kronecker Algebra, to allow the definition of functions, i.e., functional dependencies among its operands. Such extended tensor algebra have been called Generalized Tenso r Algebra (GTA). Stochastic Automata Networks (SAN) and Superposed Generalized Stochastic Petri Ne ts (SGSPN) formalisms use such Kronecker representations. We show that SAN, which uses GTA, has the sa m application scope of SGSPN, which uses CTA. We also show that any SAN model with functions has at least one equivalent representation without functions. In fact, the use of functions, and conseq uently the GTA, is not really a “need” since there is an equivalence of formalisms, but in some cases it represe nts, in a computational cost point of view, some irrefutable “advantages”. Some modeling examples are pres ent d in order to draw comparisons between the memory needs and CPU time to the generation, and the solution of the presented models.",
"title": ""
},
{
"docid": "41b87466db128bee207dd157a9fef761",
"text": "Systems that enforce memory safety for today’s operating system kernels and other system software do not account for the behavior of low-level software/hardware interactions such as memory-mapped I/O, MMU configuration, and context switching. Bugs in such low-level interactions can lead to violations of the memory safety guarantees provided by a safe execution environment and can lead to exploitable vulnerabilities in system software . In this work, we present a set of program analysis and run-time instrumentation techniques that ensure that errors in these low-level operations do not violate the assumptions made by a safety checking system. Our design introduces a small set of abstractions and interfaces for manipulating processor state, kernel stacks, memory mapped I/O objects, MMU mappings, and self modifying code to achieve this goal, without moving resource allocation and management decisions out of the kernel. We have added these techniques to a compiler-based virtual machine called Secure Virtual Architecture (SVA), to which the standard Linux kernel has been ported previously. Our design changes to SVA required only an additional 100 lines of code to be changed in this kernel. Our experimental results show that our techniques prevent reported memory safety violations due to low-level Linux operations and that these violations are not prevented by SVA without our techniques . Moreover, the new techniques in this paper introduce very little overhead over and above the existing overheads of SVA. Taken together, these results indicate that it is clearly worthwhile to add these techniques to an existing memory safety system.",
"title": ""
},
{
"docid": "d7242f26b3d7c0f71c09cc2e3914b728",
"text": "In this paper, a new offline actor-critic learning algorithm is introduced: Sampled Policy Gradient (SPG). SPG samples in the action space to calculate an approximated policy gradient by using the critic to evaluate the samples. This sampling allows SPG to search the action-Q-value space more globally than deterministic policy gradient (DPG), enabling it to theoretically avoid more local optima. SPG is compared to Q-learning and the actor-critic algorithms CACLA and DPG in a pellet collection task and a self play environment in the game Agar.io. The online game Agar.io has become massively popular on the internet due to intuitive game design and the ability to instantly compete against players around the world. From the point of view of artificial intelligence this game is also very intriguing: The game has a continuous input and action space and allows to have diverse agents with complex strategies compete against each other. The experimental results show that Q-Learning and CACLA outperform a pre-programmed greedy bot in the pellet collection task, but all algorithms fail to outperform this bot in a fighting scenario. The SPG algorithm is analyzed to have great extendability through offline exploration and it matches DPG in performance even in its basic form without extensive sampling.",
"title": ""
},
{
"docid": "4be28b696296ff779c7391b2f8d3b0c4",
"text": "The rise of Digital B2B Marketing has presented us with new opportunities and challenges as compared to traditional e-commerce. B2B setup is different from B2C setup in many ways. Along with the contrasting buying entity (company vs. individual), there are dissimilarities in order size (few dollars in e-commerce vs. up to several thousands of dollars in B2B), buying cycle (few days in B2C vs. 6–18 months in B2B) and most importantly a presence of multiple decision makers (individual or family vs. an entire company). Due to easy availability of the data and bargained complexities, most of the existing literature has been set in the B2C framework and there are not many examples in the B2B context. We present a unique approach to model next likely action of B2B customers by observing a sequence of digital actions. In this paper, we propose a unique two-step approach to model next likely action using a novel ensemble method that aims to predict the best digital asset to target customers as a next action. The paper provides a unique approach to translate the propensity model at an email address level into a segment that can target a group of email addresses. In the first step, we identify the high propensity customers for a given asset using traditional and advanced multinomial classification techniques and use non-negative least squares to stack rank different assets based on the output for ensemble model. In the second step, we perform a penalized regression to reduce the number of coefficients and obtain the satisfactory segment variables. Using real world digital marketing campaign data, we further show that the proposed method outperforms the traditional classification methods.",
"title": ""
},
{
"docid": "af3a87d82c1f11a8a111ed4276020161",
"text": "In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.",
"title": ""
}
] | scidocsrr |
0ae27fdbbbfcd6caa4c720afb631f538 | Privacy-Preserving Deep Inference for Rich User Data on The Cloud | [
{
"docid": "0a968f1dcba70ab1a42c25b1a6ec2a5c",
"text": "In recent years, privacy-preserving data mining has been studied extensively, because of the wide proliferation of sensitive information on the internet. A number of algorithmic techniques have been designed for privacy-preserving data mining. In this paper, we provide a review of the state-of-the-art methods for privacy. We discuss methods for randomization, k-anonymization, and distributed privacy-preserving data mining. We also discuss cases in which the output of data mining applications needs to be sanitized for privacy-preservation purposes. We discuss the computational and theoretical limits associated with privacy-preservation over high dimensional data sets.",
"title": ""
}
] | [
{
"docid": "9e3de4720dade2bb73d78502d7cccc8b",
"text": "Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most twovoxel thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "7ef40f6fb743ba331a9878ca8019bb7e",
"text": "Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms, that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.",
"title": ""
},
{
"docid": "d166f4cd01d22d7143487b691138023c",
"text": "Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user’s Bitcoin transactions can be linked to compromise the user’s anonymity. We present solutions to the anonymity problem for both transactions on Bitcoin’s blockchain and off the blockchain (in so called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for Bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure the anonymity and fairness during the bitcoin ↔ voucher exchange. Our schemes are practical, secure and anonymous.",
"title": ""
},
{
"docid": "98c3e7dd0c383e7cc934efa6113384ca",
"text": "In the accident of nuclear disasters or biochemical terrors, there is the strong need for robots which can move around and collect information at the disaster site. The robot should have toughness and high mobility in a location of stairs and obstacles. In this study, we propose a brand new type of mobile base named “crank-wheel” suitable for such use. Crank-wheel consists of wheels and connecting coupler link named crank-leg. Crank-wheel makes simple and smooth wheeled motion on flat ground and automatically transforms to the walking motion on rugged terrain as the crank-legs starts to contact the surface of the rugged terrain and acts as legs. This mechanism features its simple, easiness to maintain water and dust proof structure, and limited danger of biting rubbles in the driving mechanism just as the case of tracked vehicles. Effectiveness of the Crank-wheel is confirmed by several driving experiments on debris, sand and bog.",
"title": ""
},
{
"docid": "f472c6ee8382cfb508fbca29b1caade6",
"text": "Modern digital systems are severely constrained by both battery life and operating temperatures, resulting in strict limits on total power consumption and power density. To continue to scale digital throughput at constant power density, there is a need for increasing parallelism and dynamic voltage/bias scaling. This work presents an architecture and power converter implementation providing efficient power-delivery for microprocessors and other high-performance digital circuits stacked in vertical voltage domains. A multi-level DC-DC converter interfaces between a fixed DC voltage and multiple 0.7 V to 1.4 V voltage domains stacked in series. The converter implements dynamic voltage scaling (DVS) with multi-objective digital control implemented in an on-board (embedded) digital control system. We present measured results demonstrating functional multi-core DVS and performance with moderate load current steps. The converter demonstrates the use of a two-phase interleaved powertrain with coupled inductors to achieve voltage and current ripple reduction for the stacked ladder-converter architecture.",
"title": ""
},
{
"docid": "52a6319c28c6c889101d9b2b6d4a76d3",
"text": "A method is developed for imputing missing values when the probability of response depends upon the variable being imputed. The missing data problem is viewed as one of parameter estimation in a regression model with stochastic ensoring of the dependent variable. The prediction approach to imputation is used to solve this estimation problem. Wages and salaries are imputed to nonrespondents in the Current Population Survey and the results are compared to the nonrespondents' IRS wage and salary data. The stochastic ensoring approach gives improved results relative to a prediction approach that ignores the response mechanism.",
"title": ""
},
{
"docid": "6aaabe17947bc455d940047745ed7962",
"text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.",
"title": ""
},
{
"docid": "d2b27ab3eb0aa572fdf8f8e3de6ae952",
"text": "Both industry and academia have extensively investigated hardware accelerations. To address the demands in increasing computational capability and memory requirement, in this work, we propose the structured weight matrices (SWM)-based compression technique for both Field Programmable Gate Array (FPGA) and application-specific integrated circuit (ASIC) implementations. In the algorithm part, the SWM-based framework adopts block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. The SWM-based technique can reduce computational complexity from O(n2) to O(nlog n) and storage complexity from O(n2) to O(n) for each layer and both training and inference phases. For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively using the SWM-based framework, compared with the baseline of IBM TrueNorth processor under same accuracy constraints using the data set of MNIST, SVHN, and CIFAR-10. For FPGA implementations on long short term memory (LSTM) networks, the proposed SWM-based LSTM can achieve up to 21X enhancement in performance and 33.5X gains in energy efficiency compared with the ESE accelerator. For ASIC implementations, the proposed SWM-based ASIC design exhibits impressive advantages in terms of power, throughput, and energy efficiency. Experimental results indicate that this method is greatly suitable for applying DNNs onto both FPGAs and mobile/IoT devices.",
"title": ""
},
{
"docid": "db02af0f6c2994e4348c1f7c4f3191ce",
"text": "American students rank well below international peers in the disciplines of science, technology, engineering, and mathematics (STEM). Early exposure to STEM-related concepts is critical to later academic achievement. Given the rise of tablet-computer use in early childhood education settings, interactive technology might be one particularly fruitful way of supplementing early STEM education. Using a between-subjects experimental design, we sought to determine whether preschoolers could learn a fundamental math concept (i.e., measurement with non-standard units) from educational technology, and whether interactivity is a crucial component of learning from that technology. Participants who either played an interactive tablet-based game or viewed a non-interactive video demonstrated greater transfer of knowledge than those assigned to a control condition. Interestingly, interactivity contributed to better performance on near transfer tasks, while participants in the non-interactive condition performed better on far transfer tasks. Our findings suggest that, while preschool-aged children can learn early STEM skills from educational technology, interactivity may only further support learning in certain",
"title": ""
},
{
"docid": "9db779a5a77ac483bb1991060dca7c28",
"text": "An Ambient Intelligence (AmI) environment is primary developed using intelligent agents and wireless sensor networks. The intelligent agents could automatically obtain contextual information in real time using Near Field Communication (NFC) technique and wireless ad-hoc networks. In this research, we propose a stock trading and recommendation system with mobile devices (Android platform) interface in the over-the-counter market (OTC) environments. The proposed system could obtain the real-time financial information of stock price through a multi-agent architecture with plenty of useful features. In addition, NFC is used to achieve a context-aware environment allowing for automatic acquisition and transmission of useful trading recommendations and relevant stock information for investors. Finally, AmI techniques are applied to successfully create smart investment spaces, providing investors with useful monitoring tools and investment recommendation.",
"title": ""
},
{
"docid": "c159f32bda951cf15a886ff27b4aef8c",
"text": "We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to encode visual information – these play a crucial role in achieving high performance. Extensive experiments show that the proposed technique improves mean average precision by 24% on a public dataset, while being 4× faster, compared to the previous state-of-the-art.",
"title": ""
},
{
"docid": "bf7cd2303c325968879da72966054427",
"text": "Object detection methods fall into two categories, i.e., two-stage and single-stage detectors. The former is characterized by high detection accuracy while the latter usually has considerable inference speed. Hence, it is imperative to fuse their metrics for a better accuracy vs. speed trade-off. To this end, we propose a dual refinement network (DRN) to boost the performance of the single-stage detector. Inheriting from the advantages of two-stage approaches (i.e., two-step regression and accurate features for detection), anchor refinement and feature offset refinement are conducted in anchor-offset detection, where the detection head is comprised of deformable convolutions. Moreover, to leverage contextual information for describing objects, we design a multi-deformable head, in which multiple detection paths with different receptive field sizes devote themselves to detecting objects. Extensive experiments on PASCAL VOC and ImageNet VID datasets are conducted, and we achieve the state-of-the-art results and a better accuracy vs. speed trade-off, i.e., 81.4% mAP vs. 42.3 FPS on VOC2007 test set. Codes will be publicly available.",
"title": ""
},
{
"docid": "7095bf529a060dd0cd7eeb2910998cf8",
"text": "The proliferation of internet along with the attractiveness of the web in recent years has made web mining as the research area of great magnitude. Web mining essentially has many advantages which makes this technology attractive to researchers. The analysis of web user’s navigational pattern within a web site can provide useful information for applications like, server performance enhancements, restructuring a web site, direct marketing in ecommerce etc. The navigation paths may be explored based on some similarity criteria, in order to get the useful inference about the usage of web. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying K-means algorithm and suggest a method to compute the distance between sessions based on similarity of their web access path, which takes care of the issue of the user sessions that are of variable",
"title": ""
},
{
"docid": "47d7ba349d6b1d2f1024e8eed003b40b",
"text": "Although motion blur and rolling shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately, with deblurring algorithms being unable to handle rolling shutter wobble, and rolling shutter algorithms being incapable of dealing with motion blur. We propose an approach that delivers sharp and undistorted output given a single rolling shutter motion blurred image. The key to achieving this is a global modeling of the camera motion trajectory, which enables each scanline of the image to be deblurred with the corresponding motion segment. We show the results of the proposed framework through experiments on synthetic and real data.",
"title": ""
},
{
"docid": "b94d146408340ce2a89b95f1b47e91f6",
"text": "In order to improve the life quality of amputees, providing approximate manipulation ability of a human hand to that of a prosthetic hand is considered by many researchers. In this study, a biomechanical model of the index finger of the human hand is developed based on the human anatomy. Since the activation of finger bones are carried out by tendons, a tendon configuration of the index finger is introduced and used in the model to imitate the human hand characteristics and functionality. Then, fuzzy sliding mode control where the slope of the sliding surface is tuned by a fuzzy logic unit is proposed and applied to have the finger model to follow a certain trajectory. The trajectory of the finger model, which mimics the motion characteristics of the human hand, is pre-determined from the camera images of a real hand during closing and opening motion. Also, in order to check the robust behaviour of the controller, an unexpected joint friction is induced on the prosthetic finger on its way. Finally, the resultant prosthetic finger motion and the tendon forces produced are given and results are discussed.",
"title": ""
},
{
"docid": "d1d14d5f16b4a32576e9a6c43e75138f",
"text": "6 1 and cost of the product. Not all materials can be scaled-up with the same mixing process. Frequently, scaling-up the mixing process from small research batches to large quantities, necessary for production, can lead to unexpected problems. This reference book is intended to help the reader both identify and solve mixing problems. It is a comprehensive handbook that provides excellent coverage on the fundamentals, design, and applications of current mixing technology in general. Although this book includes many technology areas, one of main areas of interest to our readers would be in the polymer processing area. This would include the first eight chapters in the book and a specific application chapter on polymer processing. These cover the fundamentals of mixing technology, important to polymer processing, including residence time distributions and laminar mixing techniques. In the experimental section of the book, some of the relevant tools and techniques cover flow visualization technologies, lab scale mixing, flow and torque measurements, CFD coding, and numerical methods. There is a good overview of various types of mixers used for polymer processing in a dedicated applications chapter on mixing high viscosity materials such as polymers. There are many details given on the differences between the mixing blades in various types of high viscosity mixers and suggestions for choosing the proper mixer for high viscosity applications. The majority of the book does, however, focus on the chemical, petroleum, and pharmaceutical industries that generally process materials with much lower viscosity than polymers. The reader interested in learning about the fundamentals of mixing in general as well as some specifics on polymer processing would find this book to be a useful reference.",
"title": ""
},
{
"docid": "e9676faf7e8d03c64fdcf6aa5e09b008",
"text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. While in contrast to 2DPCA, DiaPCA reserves the correlations between variations of rows and those of columns of images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.",
"title": ""
},
{
"docid": "f3e38f283156ce65d8cfa937a55f9d0f",
"text": "A novel multi-objective evolutionary algorithm (MOEA) is developed based on Imperialist Competitive Algorithm (ICA), a newly introduced evolutionary algorithm (EA). Fast non-dominated sorting and the Sigma method are employed for ranking the solutions. The algorithm is tested on six well-known test functions each of them incorporate a particular feature that may cause difficulty to MOEAs. The numerical results indicate that MOICA shows significantly higher efficiency in terms of accuracy and maintaining a diverse population of solutions when compared to the existing salient MOEAs, namely fast elitism non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO). Considering the computational time, the proposed algorithm is slightly faster than MOPSO and significantly outperforms NSGA-II. KEYWORD Multi-objective Imperialist Competitive Algorithm, Multi-objective optimization, Pareto front.",
"title": ""
},
{
"docid": "fee50f8ab87f2b97b83ca4ef92f57410",
"text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.",
"title": ""
}
] | scidocsrr |
1089ad0b6e4711d848b904c08ad9bc56 | THE FAILURE OF E-GOVERNMENT IN DEVELOPING COUNTRIES: A LITERATURE REVIEW | [
{
"docid": "310aa30e2dd2b71c09780f7984a3663c",
"text": "E-governance is more than just a government website on the Internet. The strategic objective of e-governance is to support and simplify governance for all parties; government, citizens and businesses. The use of ICTs can connect all three parties and support processes and activities. In other words, in e-governance electronic means support and stimulate good governance. Therefore, the objectives of e-governance are similar to the objectives of good governance. Good governance can be seen as an exercise of economic, political, and administrative authority to better manage affairs of a country at all levels. It is not difficult for people in developed countries to imagine a situation in which all interaction with government can be done through one counter 24 hours a day, 7 days a week, without waiting in lines. However to achieve this same level of efficiency and flexibility for developing countries is going to be difficult. The experience in developed countries shows that this is possible if governments are willing to decentralize responsibilities and processes, and if they start to use electronic means. This paper is going to examine the legal and infrastructure issues related to e-governance from the perspective of developing countries. Particularly it will examine how far the developing countries have been successful in providing a legal framework.",
"title": ""
}
] | [
{
"docid": "70242cb6aee415682c03da6bfd033845",
"text": "This paper presents a class of linear predictors for nonlinear controlled dynamical systems. The basic idea is to lift (or embed) the nonlinear dynamics into a higher dimensional space where its evolution is approximately linear. In an uncontrolled setting, this procedure amounts to numerical approximations of the Koopman operator associated to the nonlinear dynamics. In this work, we extend the Koopman operator to controlled dynamical systems and apply the Extended Dynamic Mode Decomposition (EDMD) to compute a finite-dimensional approximation of the operator in such a way that this approximation has the form of a linear controlled dynamical system. In numerical examples, the linear predictors obtained in this way exhibit a performance superior to existing linear predictors such as those based on local linearization or the so called Carleman linearization. Importantly, the procedure to construct these linear predictors is completely data-driven and extremely simple – it boils down to a nonlinear transformation of the data (the lifting) and a linear least squares problem in the lifted space that can be readily solved for large data sets. These linear predictors can be readily used to design controllers for the nonlinear dynamical system using linear controller design methodologies. We focus in particular on model predictive control (MPC) and show that MPC controllers designed in this way enjoy computational complexity of the underlying optimization problem comparable to that of MPC for a linear dynamical system with the same number of control inputs and the same dimension of the state-space. Importantly, linear inequality constraints on the state and control inputs as well as nonlinear constraints on the state can be imposed in a linear fashion in the proposed MPC scheme. Similarly, cost functions nonlinear in the state variable can be handled in a linear fashion. We treat both the full-state measurement case and the input-output case, as well as systems with disturbances / noise. Numerical examples (including a high-dimensional nonlinear PDE control) demonstrate the approach with the source code available online2.",
"title": ""
},
{
"docid": "ced13f6c3e904f5bd833e2f2621ae5e2",
"text": "A growing amount of research focuses on learning in group settings and more specifically on learning in computersupported collaborative learning (CSCL) settings. Studies on western students indicate that online collaboration enhances student learning achievement; however, few empirical studies have examined student satisfaction, performance, and knowledge construction through online collaboration from a cross-cultural perspective. This study examines satisfaction, performance, and knowledge construction via online group discussions of students in two different cultural contexts. Students were both first-year university students majoring in educational sciences at a Flemish university and a Chinese university. Differences and similarities of the two groups of students with regard to satisfaction, learning process, and achievement were analyzed.",
"title": ""
},
{
"docid": "3a0da20211697fbcce3493aff795556c",
"text": "OBJECTIVES\nWe studied whether park size, number of features in the park, and distance to a park from participants' homes were related to a park being used for physical activity.\n\n\nMETHODS\nWe collected observational data on 28 specific features from 33 parks. Adult residents in surrounding areas (n=380) completed 7-day physical activity logs that included the location of their activities. We used logistic regression to examine the relative importance of park size, features, and distance to participants' homes in predicting whether a park was used for physical activity, with control for perceived neighborhood safety and aesthetics.\n\n\nRESULTS\nParks with more features were more likely to be used for physical activity; size and distance were not significant predictors. Park facilities were more important than were park amenities. Of the park facilities, trails had the strongest relationship with park use for physical activity.\n\n\nCONCLUSIONS\nSpecific park features may have significant implications for park-based physical activity. Future research should explore these factors in diverse neighborhoods and diverse parks among both younger and older populations.",
"title": ""
},
{
"docid": "102ed07783d46a8ebadcad4b30ccb3c8",
"text": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"title": ""
},
{
"docid": "99206cfadd7aeb90f4cebaa1edebc0e1",
"text": "An energy-efficient gait planning (EEGP) and control system is established for biped robots with three-mass inverted pendulum mode (3MIPM), which utilizes both vertical body motion (VBM) and allowable zero-moment-point (ZMP) region (AZR). Given a distance to be traveled, we newly designed an online gait synthesis algorithm to construct a complete walking cycle, i.e., a starting step, multiple cyclic steps, and a stopping step, in which: 1) ZMP was fully manipulated within AZR; and 2) vertical body movement was allowed to relieve knee bending. Moreover, gait parameter optimization is effectively performed to determine the optimal set of gait parameters, i.e., average body height and amplitude of VBM, number of steps, and average walking speed, which minimizes energy consumption of actuation motors for leg joints under practical constraints, i.e., geometrical constraints, friction force limit, and yawing moment limit. Various simulations were conducted to identify the effectiveness of the proposed method and verify energy-saving performance for various ZMP regions. Our control system was implemented and tested on the humanoid robot DARwIn-OP.",
"title": ""
},
{
"docid": "9fc2d92c42400a45cb7bf6c998dc9236",
"text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.",
"title": ""
},
{
"docid": "c1ba049befffa94e358555056df15cc2",
"text": "People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.",
"title": ""
},
{
"docid": "277bdeccc25baa31ba222ff80a341ef2",
"text": "Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problemsolving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.",
"title": ""
},
{
"docid": "a0c6b1817a08d1be63dff9664852a6b4",
"text": "Despite years of HCI research on digital technology in museums, it is still unclear how different interactions impact on visitors'. A comparative evaluation of smart replicas, phone app and smart cards looked at the personal preferences, behavioural change, and the appeal of mobiles in museums. 76 participants used all three interaction modes and gave their opinions in a questionnaire; participants interaction was also observed. The results show the phone is the most disliked interaction mode while tangible interaction (smart card and replica combined) is the most liked. Preference for the phone favour mobility to the detriment of engagement with the exhibition. Different behaviours when interacting with the phone or the tangibles where observed. The personal visiting style appeared to be only marginally affected by the device. Visitors also expect museums to provide the phones against the current trend of developing apps in a \"bring your own device\" approach.",
"title": ""
},
{
"docid": "d9df98fbd7281b67347df0f2643323fa",
"text": "Predefined categories can be assigned to the natural language text using for text classification. It is a “bag-of-word” representation, previous documents have a word with values, it represents how frequently this word appears in the document or not. But large documents may face many problems because they have irrelevant or abundant information is there. This paper explores the effect of other types of values, which express the distribution of a word in the document. These values are called distributional features. All features are calculated by tfidf style equation and these features are combined with machine learning techniques. Term frequency is one of the major factor for distributional features it holds weighted item set. When the need is to minimize a certain score function, discovering rare data correlations is more interesting than mining frequent ones. This paper tackles the issue of discovering rare and weighted item sets, i.e., the infrequent weighted item set mining problem. The classifier which gives the more accurate result is selected for categorization. Experiments show that the distributional features are useful for text categorization.",
"title": ""
},
{
"docid": "46f646c82f30eae98142c83045176353",
"text": "In this article, the authors present a psychodynamically oriented psychotherapy approach for posttraumatic stress disorder (PTSD) related to childhood abuse. This neurobiologically informed, phase-oriented treatment approach, which has been developed in Germany during the past 20 years, takes into account the broad comorbidity and the large degree of ego-function impairment typically found in these patients. Based on a psychodynamic relationship orientation, this treatment integrates a variety of trauma-specific imaginative and resource-oriented techniques. The approach places major emphasis on the prevention of vicarious traumatization. The authors are presently planning to test the approach in a randomized controlled trial aimed at strengthening the evidence base for psychodynamic psychotherapy in PTSD.",
"title": ""
},
{
"docid": "87c793be992e5d25c8422011bd52be12",
"text": "A major challenge in real-world feature matching problems is to tolerate the numerous outliers arising in typical visual tasks. Variations in object appearance, shape, and structure within the same object class make it harder to distinguish inliers from outliers due to clutters. In this paper, we propose a max-pooling approach to graph matching, which is not only resilient to deformations but also remarkably tolerant to outliers. The proposed algorithm evaluates each candidate match using its most promising neighbors, and gradually propagates the corresponding scores to update the neighbors. As final output, it assigns a reliable score to each match together with its supporting neighbors, thus providing contextual information for further verification. We demonstrate the robustness and utility of our method with synthetic and real image experiments.",
"title": ""
},
{
"docid": "d7108ba99aaa9231d926a52617baa712",
"text": "In this paper, an ultra-compact single-chip solar energy harvesting IC using on-chip solar cell for biomedical implant applications is presented. By employing an on-chip charge pump with parallel connected photodiodes, a 3.5 <inline-formula> <tex-math notation=\"LaTeX\">$\\times$</tex-math></inline-formula> efficiency improvement can be achieved when compared with the conventional stacked photodiode approach to boost the harvested voltage while preserving a single-chip solution. A photodiode-assisted dual startup circuit (PDSC) is also proposed to improve the area efficiency and increase the startup speed by 77%. By employing an auxiliary charge pump (AQP) using zero threshold voltage (ZVT) devices in parallel with the main charge pump, a low startup voltage of 0.25 V is obtained while minimizing the reversion loss. A <inline-formula> <tex-math notation=\"LaTeX\">$4\\, {\\mathbf{V}}_{\\mathbf{in}}$</tex-math></inline-formula> gate drive voltage is utilized to reduce the conduction loss. Systematic charge pump and solar cell area optimization is also introduced to improve the energy harvesting efficiency. The proposed system is implemented in a standard 0.18- <inline-formula> <tex-math notation=\"LaTeX\">$\\mu\\text{m}$</tex-math></inline-formula> CMOS technology and occupies an active area of 1.54 <inline-formula> <tex-math notation=\"LaTeX\">$\\text{mm}^{2}$</tex-math></inline-formula>. Measurement results show that the on-chip charge pump can achieve a maximum efficiency of 67%. With an incident power of 1.22 <inline-formula> <tex-math notation=\"LaTeX\">$\\text{mW/cm}^{2}$</tex-math></inline-formula> from a halogen light source, the proposed energy harvesting IC can deliver an output power of 1.65 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu\\text{W}$</tex-math></inline-formula> at 64% charge pump efficiency. The chip prototype is also verified using <italic>in-vitro</italic> experiment.",
"title": ""
},
{
"docid": "e3eae34f1ad48264f5b5913a65bf1247",
"text": "Double spending and blockchain forks are two main issues that the Bitcoin crypto-system is confronted with. The former refers to an adversary's ability to use the very same coin more than once while the latter reflects the occurrence of transient inconsistencies in the history of the blockchain distributed data structure. We present a new approach to tackle these issues: it consists in adding some local synchronization constraints on Bitcoin's validation operations, and in making these constraints independent from the native blockchain protocol. Synchronization constraints are handled by nodes which are randomly and dynamically chosen in the Bitcoin system. We show that with such an approach, content of the blockchain is consistent with all validated transactions and blocks which guarantees the absence of both double-spending attacks and blockchain forks.",
"title": ""
},
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
{
"docid": "d40aa76e76c44da4c6237f654dcdab45",
"text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.",
"title": ""
},
{
"docid": "5838d6a17e2223c6421da33d5985edd1",
"text": "In this article, I provide commentary on the Rudd et al. (2009) article advocating thorough informed consent with suicidal clients. I examine the Rudd et al. recommendations in light of their previous empirical-research and clinical-practice articles on suicidality, and from the perspective of clinical practice with suicidal clients in university counseling center settings. I conclude that thorough informed consent is a clinical intervention that is still in preliminary stages of development, necessitating empirical research and clinical training before actual implementation as an ethical clinical intervention. (PsycINFO Database Record (c) 2010 APA, all rights reserved).",
"title": ""
},
{
"docid": "a4cfe72cae5bdaed110299d652e60a6f",
"text": "Hoffa's (infrapatellar) fat pad (HFP) is one of the knee fat pads interposed between the joint capsule and the synovium. Located posterior to patellar tendon and anterior to the capsule, the HFP is richly innervated and, therefore, one of the sources of anterior knee pain. Repetitive local microtraumas, impingement, and surgery causing local bleeding and inflammation are the most frequent causes of HFP pain and can lead to a variety of arthrofibrotic lesions. In addition, the HFP may be secondarily involved to menisci and ligaments disorders, injuries of the patellar tendon and synovial disorders. Patients with oedema or abnormalities of the HFP on magnetic resonance imaging (MRI) are often symptomatic; however, these changes can also be seen in asymptomatic patients. Radiologists should be cautious in emphasising abnormalities of HFP since they do not always cause pain and/or difficulty in walking and, therefore, do not require therapy. Teaching Points • Hoffa's fat pad (HFP) is richly innervated and, therefore, a source of anterior knee pain. • HFP disorders are related to traumas, involvement from adjacent disorders and masses. • Patients with abnormalities of the HFP on MRI are often but not always symptomatic. • Radiologists should be cautious in emphasising abnormalities of HFP.",
"title": ""
},
{
"docid": "4ae82b3362756b0efed84596076ea6fb",
"text": "Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof-of-concept for decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.",
"title": ""
}
] | scidocsrr |
63d340f89dd18d1873c3bdaf4de2f732 | DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction | [
{
"docid": "3ca057959a24245764953a6aa1b2ed84",
"text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.",
"title": ""
},
{
"docid": "3388d2e88fdc2db9967da4ddb452d9f1",
"text": "Entity pair provide essential information for identifying relation type. Aiming at this characteristic, Position Feature is widely used in current relation classification systems to highlight the words close to them. However, semantic knowledge involved in entity pair has not been fully utilized. To overcome this issue, we propose an Entity-pair-based Attention Mechanism, which is specially designed for relation classification. Recently, attention mechanism significantly promotes the development of deep learning in NLP. Inspired by this, for specific instance(entity pair, sentence), the corresponding entity pair information is incorporated as prior knowledge to adaptively compute attention weights for generating sentence representation. Experimental results on SemEval-2010 Task 8 dataset show that our method outperforms most of the state-of-the-art models, without external linguistic features.",
"title": ""
},
{
"docid": "c1943f443b0e7be72091250b34262a8f",
"text": "We survey recent approaches to noise reduction in distant supervision learning for relation extraction. We group them according to the principles they are based on: at-least-one constraints, topic-based models, or pattern correlations. Besides describing them, we illustrate the fundamental differences and attempt to give an outlook to potentially fruitful further research. In addition, we identify related work in sentiment analysis which could profit from approaches to noise reduction.",
"title": ""
},
{
"docid": "9c44aba7a9802f1fe95fbeb712c23759",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
}
] | [
{
"docid": "dea7d83ed497fc95f4948a5aa4787b18",
"text": "The distinguishing feature of the Fog Computing (FC) paradigm is that FC spreads communication and computing resources over the wireless access network, so as to provide resource augmentation to resource and energy-limited wireless (possibly mobile) devices. Since FC would lead to substantial reductions in energy consumption and access latency, it will play a key role in the realization of the Fog of Everything (FoE) paradigm. The core challenge of the resulting FoE paradigm is tomaterialize the seamless convergence of three distinct disciplines, namely, broadband mobile communication, cloud computing, and Internet of Everything (IoE). In this paper, we present a new IoE architecture for FC in order to implement the resulting FoE technological platform. Then, we elaborate the related Quality of Service (QoS) requirements to be satisfied by the underlying FoE technological platform. Furthermore, in order to corroborate the conclusion that advancements in the envisioned architecture description, we present: (i) the proposed energy-aware algorithm adopt Fog data center; and, (ii) the obtained numerical performance, for a real-world case study that shows that our approach saves energy consumption impressively in theFog data Center compared with the existing methods and could be of practical interest in the incoming Fog of Everything (FoE) realm.",
"title": ""
},
{
"docid": "a73df97081ec01929e06969c52775007",
"text": "Massive graphs arise naturally in a lot of applications, especially in communication networks like the internet. The size of these graphs makes it very hard or even impossible to store set of edges in the main memory. Thus, random access to the edges can't be realized, which makes most o ine algorithms unusable. This essay investigates e cient algorithms that read the edges only in a xed sequential order. Since even basic graph problems often need at least linear space in the number of vetices to be solved, the storage space bounds are relaxed compared to the classic streaming model, such that the bound is O(n · polylog n). The essay describes algorithms for approximations of the unweighted and weighted matching problem and gives a o(log1− n) lower bound for approximations of the diameter. Finally, some results for further graph problems are discussed.",
"title": ""
},
{
"docid": "6ae739344034410a570b12a57db426e3",
"text": "In recent times we tend to use a number of surveillance systems for monitoring the targeted area. This requires an enormous amount of storage space along with a lot of human power in order to implement and monitor the area under surveillance. This is supposed to be costly and not a reliable process. In this paper we propose an intelligent surveillance system that continuously monitors the targeted area and detects motion in each and every frame. If the system detects motion in the targeted area then a notification is automatically sent to the user by sms and the video starts getting recorded till the motion is stopped. Using this method the required memory space for storing the video is reduced since it doesn't store the entire video but stores the video only when a motion is detected. This is achieved by using real time video processing using open CV (computer vision / machine vision) technology and raspberry pi system.",
"title": ""
},
{
"docid": "2c5ab4dddbb6aeae4542b42f57e54d72",
"text": "Online action detection is a challenging problem: a system needs to decide what action is happening at the current frame, based on previous frames only. Fortunately in real-life, human actions are not independent from one another: there are strong (long-term) dependencies between them. An online action detection method should be able to capture these dependencies, to enable a more accurate early detection. At first sight, an LSTM seems very suitable for this problem. It is able to model both short-term and long-term patterns. It takes its input one frame at the time, updates its internal state and gives as output the current class probabilities. In practice, however, the detection results obtained with LSTMs are still quite low. In this work, we start from the hypothesis that it may be too difficult for an LSTM to learn both the interpretation of the input and the temporal patterns at the same time. We propose a two-stream feedback network, where one stream processes the input and the other models the temporal relations. We show improved detection accuracy on an artificial toy dataset and on the Breakfast Dataset [21] and the TVSeries Dataset [7], reallife datasets with inherent temporal dependencies between the actions.",
"title": ""
},
{
"docid": "51e307584d6446ba2154676d02d2cc84",
"text": "This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences.",
"title": ""
},
{
"docid": "de48b60276b27861d58aaaf501606d69",
"text": "Many environmental variables that are important for the development of chironomid larvae (such as water temperature, oxygen availability, and food quantity) are related to water depth, and a statistically strong relationship between chironomid distribution and water depth is therefore expected. This study focuses on the distribution of fossil chironomids in seven shallow lakes and one deep lake from the Plymouth Aquifer (Massachusetts, USA) and aims to assess the influence of water depth on chironomid assemblages within a lake. Multiple samples were taken per lake in order to study the distribution of fossil chironomid head capsules within a lake. Within each lake, the chironomid assemblages are diverse and the changes that are seen in the assemblages are strongly related to changes in water depth. Several thresholds (i.e., where species turnover abruptly changes) are identified in the assemblages, and most lakes show abrupt changes at about 1–2 and 5–7 m water depth. In the deep lake, changes also occur at 9.6 and 15 m depth. The distribution of many individual taxa is significantly correlated to water depth, and we show that the identification of different taxa within the genus Tanytarsus is important because different morphotypes show different responses to water depth. We conclude that the chironomid fauna is sensitive to changes in lake level, indicating that fossil chironomid assemblages can be used as a tool for quantitative reconstruction of lake level changes.",
"title": ""
},
{
"docid": "5495aeaa072a1f8f696298ebc7432045",
"text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.",
"title": ""
},
{
"docid": "d7574e4d5fd3a395907db7a7d380652b",
"text": "In this paper, we analyze and evaluate word embeddings for representation of longer texts in the multi-label document classification scenario. The embeddings are used in three convolutional neural network topologies. The experiments are realized on the Czech ČTK and English Reuters-21578 standard corpora. We compare the results of word2vec static and trainable embeddings with randomly initialized word vectors. We conclude that initialization does not play an important role for classification. However, learning of word vectors is crucial to obtain good results.",
"title": ""
},
{
"docid": "fbd05f764470b94af30c7799e94ff0f0",
"text": "Agent-based modeling of human social behavior is an increasingly important research area. A key factor in human social interaction is our beliefs about others, a theory of mind. Whether we believe a message depends not only on its content but also on our model of the communicator. How we act depends not only on the immediate effect but also on how we believe others will react. In this paper, we discuss PsychSim, an implemented multiagent-based simulation tool for modeling interactions and influence. While typical approaches to such modeling have used first-order logic, PsychSim agents have their own decision-theoretic model of the world, including beliefs about its environment and recursive models of other agents. Using these quantitative models of uncertainty and preferences, we have translated existing psychological theories into a decision-theoretic semantics that allow the agents to reason about degrees of believability in a novel way. We discuss PsychSim’s underlying architecture and describe its application to a school violence scenario for illustration.",
"title": ""
},
{
"docid": "18824b0ce748e097c049440439116b77",
"text": "Before we try to specify how to give a semantic analysis of discourse, we must define what semantic analysis is and what kinds of semantic analysis can be distinguished. Such a definition will be as complex as the number of semantic theories in the various disciplines involved in the study of language: linguistics and grammar, the philosophy of language, logic, cognitive psychology, and sociology, each with several competing semantic theories. These theories will be different according to their object of analysis, their aims, and their methods. Yet, they will also have some common properties that allow us to call them semantic theories. In this chapter I first enumerate more or less intuitively a number of these common properties, then select some of them for further theoretical analysis, and finally apply the theoretical notions in actual semantic analyses of some discourse fragments. In the most general sense, semantics is a component theory within a larger semiotic theory about meaningful, symbolic, behavior. Hence we have not only a semantics of natural language utterances or acts, but also of nonverbal or paraverbal behavior, such as gestures, pictures and films, logical systems or computer languages, sign languages of the deaf, and perhaps social interaction in general. In this chapter we consider only the semantics of natural-language utterances, that is, discourses, and their component elements, such as words, phrases, clauses, sentences, paragraphs, and other identifiable discourse units. Other semiotic aspects of verbal and nonverbal communication are treated elsewhere in this Handbook. Probably the most general concept used to denote the specific object",
"title": ""
},
{
"docid": "921c7a6c3902434b250548e573816978",
"text": "Energy harvesting based on tethered kites makes use of the advantage, that these airborne wind energy systems are able to exploit higher wind speeds at higher altitudes. The setup, considered in this paper, is based on the pumping cycle, which generates energy by winching out at high tether forces, driving an electrical generator while flying crosswind and winching in at a stationary neutral position, thus leaving a net amount of generated energy. The economic operation of such airborne wind energy plants demands for a reliable control system allowing for a complete autonomous operation of cycles. This task involves the flight control of the kite as well as the operation of a winch for the tether. The focus of this paper is put on the flight control, which implements an accurate direction control towards target points allowing for eight-down pattern flights. In addition, efficient winch control strategies are provided. The paper summarises a simple comprehensible model with equations of motion in order to motivate the approach of the control system design. After an extended overview on the control system, the flight controller parts are discussed in detail. Subsequently, the winch strategies based on an optimisation scheme are presented. In order to demonstrate the real world functionality of the presented algorithms, flight data from a fully automated pumping-cycle operation of a small-scale prototype setup based on a 30 m2 kite and a 50 kW electrical motor/generator is given.",
"title": ""
},
{
"docid": "51ece87cfa463cd76c6fd60e2515c9f4",
"text": "In a 1998 speech before the California Science Center in Los Angeles, then US VicePresident Al Gore called for a global undertaking to build a multi-faceted computing system for education and research, which he termed “Digital Earth.” The vision was that of a system providing access to what is known about the planet and its inhabitants’ activities – currently and for any time in history – via responses to queries and exploratory tools. Furthermore, it would accommodate modeling extensions for predicting future conditions. Organized efforts towards realizing that vision have diminished significantly since 2001, but progress on key requisites has been made. As the 10 year anniversary of that influential speech approaches, we re-examine it from the perspective of a systematic software design process and find the envisioned system to be in many respects inclusive of concepts of distributed geolibraries and digital atlases. A preliminary definition for a particular digital earth system as: “a comprehensive, distributed geographic information and knowledge organization system,” is offered and discussed. We suggest that resumption of earlier design and focused research efforts can and should be undertaken, and may prove a worthwhile “Grand Challenge” for the GIScience community.",
"title": ""
},
{
"docid": "1b6e35187b561de95051f67c70025152",
"text": "Ž . The technology acceptance model TAM proposes that ease of use and usefulness predict applications usage. The current research investigated TAM for work-related tasks with the World Wide Web as the application. One hundred and sixty-three subjects responded to an e-mail survey about a Web site they access often in their jobs. The results support TAM. They also Ž . Ž . demonstrate that 1 ease of understanding and ease of finding predict ease of use, and that 2 information quality predicts usefulness for revisited sites. In effect, the investigation applies TAM to help Web researchers, developers, and managers understand antecedents to users’ decisions to revisit sites relevant to their jobs. q 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "d90b6c61369ff0458843241cd30437ba",
"text": "The unprecedented challenges of creating Biosphere 2, the world's first laboratory for biospherics, the study of global ecology and long-term closed ecological system dynamics, led to breakthrough developments in many fields, and a deeper understanding of the opportunities and difficulties of material closure. This paper will review accomplishments and challenges, citing some of the key research findings and publications that have resulted from the experiments in Biosphere 2. Engineering accomplishments included development of a technique for variable volume to deal with pressure differences between the facility and outside environment, developing methods of atmospheric leak detection and sealing, while achieving new standards of closure, with an annual atmospheric leakrate of less than 10%, or less than 300 ppm per day. This degree of closure permitted detailed tracking of carbon dioxide, oxygen, and trace gases such as nitrous oxide and ethylene over the seasonal variability of two years. Full closure also necessitated developing new approaches and technologies for complete air, water, and wastewater recycle and reuse within the facility. The development of a soil-based highly productive agricultural system was a first in closed ecological systems, and much was learned about managing a wide variety of crops using non-chemical means of pest and disease control. Closed ecological systems have different temporal biogeochemical cycling and ranges of atmospheric components because of their smaller reservoirs of air, water and soil, and higher concentration of biomass, and Biosphere 2 provided detailed examination and modeling of these accelerated cycles over a period of closure which measured in years. Medical research inside Biosphere 2 included the effects on humans of lowered oxygen: the discovery that human productivity can be maintained with good health with lowered atmospheric oxygen levels could lead to major economies on the design of space stations and planetary/lunar settlements. The improved health resulting from the calorie-restricted but nutrient dense Biosphere 2 diet was the first such scientifically controlled experiment with humans. The success of Biosphere 2 in creating a diversity of terrestrial and marine environments, from rainforest to coral reef, allowed detailed studies with comprehensive measurements such that the dynamics of these complex biomic systems are now better understood. The coral reef ecosystem, the largest artificial reef ever built, catalyzed methods of study now being applied to planetary coral reef systems. Restoration ecology advanced through the creation and study of the dynamics of adaptation and self-organization of the biomes in Biosphere 2. The international interest that Biosphere 2 generated has given new impetus to the public recognition of the sciences of biospheres (biospherics), biomes and closed ecological life systems. The facility, although no longer a materially-closed ecological system, is being used as an educational facility by Columbia University as an introduction to the study of the biosphere and complex system ecology and for carbon dioxide impacts utilizing the complex ecosystems created in Biosphere '. 
The many lessons learned from Biosphere 2 are being used by its key team of creators in their design and operation of a laboratory-sized closed ecological system, the Laboratory Biosphere, in operation as of March 2002, and for the design of a Mars on Earth(TM) prototype life support system for manned missions to Mars and Mars surface habitats. Biosphere 2 is an important foundation for future advances in biospherics and closed ecological system research.",
"title": ""
},
{
"docid": "ffd4fc3c7d63eab3cc8a7129f31afdea",
"text": "The growth of desktop 3-D printers is driving an interest in recycled 3-D printer filament to reduce costs of distributed production. Life cycle analysis studies were performed on the recycling of high density polyethylene into filament suitable for additive layer manufacturing with 3-D printers. The conventional centralized recycling system for high population density and low population density rural locations was compared to the proposed in home, distributed recycling system. This system would involve shredding and then producing filament with an open-source plastic extruder from postconsumer plastics and then printing the extruded filament into usable, value-added parts and products with 3-D printers such as the open-source self replicating rapid prototyper, or RepRap. The embodied energy and carbon dioxide emissions were calculated for high density polyethylene recycling using SimaPro 7.2 and the database EcoInvent v2.0. The results showed that distributed recycling uses less embodied energy than the best-case scenario used for centralized recycling. For centralized recycling in a low-density population case study involving substantial embodied energy use for transportation and collection these savings for distributed recycling were found to extend to over 80%. If the distributed process is applied to the U.S. high density polyethylene currently recycled, more than 100 million MJ of energy could be conserved per annum along with the concomitant significant reductions in greenhouse gas emissions. It is concluded that with the open-source 3-D printing network expanding rapidly the potential for widespread adoption of in-home recycling of post-consumer plastic represents a novel path to a future of distributed manufacturing appropriate for both the developed and developing world with lower environmental impacts than the current system.",
"title": ""
},
{
"docid": "fa07419129af7100fc0bf38746f084aa",
"text": "We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.",
"title": ""
},
{
"docid": "8baa6af3ee08029f0a555e4f4db4e218",
"text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any datasetspecific engineering.",
"title": ""
},
{
"docid": "836815216224b278df229927d825e411",
"text": "Logistics demand forecasting is important for investment decision-making of infrastructure and strategy programming of the logistics industry. In this paper, a hybrid method which combines the Grey Model, artificial neural networks and other techniques in both learning and analyzing phases is proposed to improve the precision and reliability of forecasting. After establishing a learning model GNNM(1,8) for road logistics demand forecasting, we chose road freight volume as target value and other economic indicators, i.e. GDP, production value of primary industry, total industrial output value, outcomes of tertiary industry, retail sale of social consumer goods, disposable personal income, and total foreign trade value as the seven key influencing factors for logistics demand. Actual data sequences of the province of Zhejiang from years 1986 to 2008 were collected as training and test-proof samples. By comparing the forecasting results, it turns out that GNNM(1,8) is an appropriate forecasting method to yield higher accuracy and lower mean absolute percentage errors than other individual models for short-term logistics demand forecasting.",
"title": ""
},
{
"docid": "16b8a948e76a04b1703646d5e6111afe",
"text": "Nanotechnology offers many potential benefits to cancer research through passive and active targeting, increased solubility/bioavailablility, and novel therapies. However, preclinical characterization of nanoparticles is complicated by the variety of materials, their unique surface properties, reactivity, and the task of tracking the individual components of multicomponent, multifunctional nanoparticle therapeutics in in vivo studies. There are also regulatory considerations and scale-up challenges that must be addressed. Despite these hurdles, cancer research has seen appreciable improvements in efficacy and quite a decrease in the toxicity of chemotherapeutics because of 'nanotech' formulations, and several engineered nanoparticle clinical trials are well underway. This article reviews some of the challenges and benefits of nanomedicine for cancer therapeutics and diagnostics.",
"title": ""
},
{
"docid": "a40c00b1dc4a8d795072e0a8cec09d7a",
"text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.",
"title": ""
}
] | scidocsrr |
02ec7baec5a9136c14dd1e1aa8dde635 | Congestion Avoidance with Incremental Filter Aggregation in Content-Based Routing Networks | [
{
"docid": "7f7e7f7ddcbb4d98270c0ba50a3f7a25",
"text": "Workflow management systems are traditionally centralized, creating a single point of failure and a scalability bottleneck. In collaboration with Cybermation, Inc., we have developed a content-based publish/subscribe platform, called PADRES, which is a distributed middleware platform with features inspired by the requirements of workflow management and business process execution. These features constitute original additions to publish/subscribe systems and include an expressive subscription language, composite subscription processing, a rulebased matching and routing mechanism, historc, query-based data access, and the support for the decentralized execution of business process specified in XML. PADRES constitutes the basis for the next generation of enterprise management systems developed by Cybermation, Inc., including business process automation, monitoring, and execution applications.",
"title": ""
}
] | [
{
"docid": "840c42456a69d20deead9f8574f6ee14",
"text": "Millimeter wave (mmWave) is a promising approach for the fifth generation cellular networks. It has a large available bandwidth and high gain antennas, which can offer interference isolation and overcome high frequency-dependent path loss. In this paper, we study the non-uniform heterogeneous mmWave network. Non-uniform heterogeneous networks are more realistic in practical scenarios than traditional independent homogeneous Poisson point process (PPP) models. We derive the signal-to-noise-plus-interference ratio (SINR) and rate coverage probabilities for a two-tier non-uniform millimeter-wave heterogeneous cellular network, where the macrocell base stations (MBSs) are deployed as a homogeneous PPP and the picocell base stations (PBSs) are modeled as a Poisson hole process (PHP), dependent on the MBSs. Using tools from stochastic geometry, we derive the analytical results for the SINR and rate coverage probabilities. The simulation results validate the analytical expressions. Furthermore, we find that there exists an optimum density of the PBS that achieves the best coverage probability and the change rule with different radii of the exclusion region. Finally, we show that as expected, mmWave outperforms microWave cellular network in terms of rate coverage probability for this system.",
"title": ""
},
{
"docid": "d08c24228e43089824357342e0fa0843",
"text": "This paper presents a new register assignment heuristic for procedures in SSA Form, whose interference graphs are chordal; the heuristic is called optimistic chordal coloring (OCC). Previous register assignment heuristics eliminate copy instructions via coalescing, in other words, merging nodes in the interference graph. Node merging, however, can not preserve the chordal graph property, making it unappealing for SSA-based register allocation. OCC is based on graph coloring, but does not employ coalescing, and, consequently, preserves graph chordality, and does not increase its chromatic number; in this sense, OCC is conservative as well as optimistic. OCC is observed to eliminate at least as many dynamically executed copy instructions as iterated register coalescing (IRC) for a set of chordal interference graphs generated from several Mediabench and MiBench applications. In many cases, OCC and IRC were able to find optimal or near-optimal solutions for these graphs. OCC ran 1.89x faster than IRC, on average.",
"title": ""
},
{
"docid": "7df3fe3ffffaac2fb6137fdc440eb9f4",
"text": "The amount of information in medical publications continues to increase at a tremendous rate. Systematic reviews help to process this growing body of information. They are fundamental tools for evidence-based medicine. In this paper, we show that automatic text classification can be useful in building systematic reviews for medical topics to speed up the reviewing process. We propose a per-question classification method that uses an ensemble of classifiers that exploit the particular protocol of a systematic review. We also show that when integrating the classifier in the human workflow of building a review the per-question method is superior to the global method. We test several evaluation measures on a real dataset.",
"title": ""
},
{
"docid": "c2f807e336be1b8d918d716c07668ae1",
"text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of proposed converter has reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode. This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter in order to prove the better soft-switching capability, reduced switching losses and efficiency improvement than the conventional converter.",
"title": ""
},
{
"docid": "cbc4fc5d233c55fcc065fcc64b0404d8",
"text": "PURPOSE\nTo determine if noise damage in the organ of Corti is different in the low- and high-frequency regions of the cochlea.\n\n\nMATERIALS AND METHODS\nChinchillas were exposed for 2 to 432 days to a 0.5 (low-frequency) or 4 kHz (high-frequency) octave band of noise at 47 to 95 dB sound pressure level. Auditory thresholds were determined before, during, and after the noise exposure. The cochleas were examined microscopically as plastic-embedded flat preparations. Missing cells were counted, and the sequence of degeneration was determined as a function of recovery time (0-30 days).\n\n\nRESULTS\nWith high-frequency noise, primary damage began as small focal losses of outer hair cells in the 4-8 kHz region. With continued exposure, damage progressed to involve loss of an entire segment of the organ of Corti, along with adjacent myelinated nerve fibers. Much of the latter loss is secondary to the intermixing of cochlear fluids through the damaged reticular lamina. With low-frequency noise, primary damage appeared as outer hair cell loss scattered over a broad area in the apex. With continued exposure, additional apical outer hair cells degenerated, while supporting cells, inner hair cells, and nerve fibers remained intact. Continued exposure to low-frequency noise also resulted in focal lesions in the basal cochlea that were indistinguishable from those resulting from exposure to high-frequency noise.\n\n\nCONCLUSIONS\nThe patterns of cochlear damage and their relation to functional measures of hearing in noise-exposed chinchillas are similar to those seen in noise-exposed humans. Thus, the chinchilla is an excellent model for studying noise effects, with the long-term goal of identifying ways to limit noise-induced hearing loss in humans.",
"title": ""
},
{
"docid": "1dbaa72cd95c32d1894750357e300529",
"text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.",
"title": ""
},
{
"docid": "e7a86eeb576d4aca3b5e98dc53fcb52d",
"text": "Dictionary methods for cross-language information retrieval give performance below that for mono-lingual retrieval. Failure to translate multi-term phrases has km shown to be one of the factors responsible for the errors associated with dictionary methods. First, we study the importance of phrasaI translation for this approach. Second, we explore the role of phrases in query expansion via local context analysis and local feedback and show how they can be used to significantly reduce the error associated with automatic dictionary translation.",
"title": ""
},
{
"docid": "224cb33193938d5bfb8d604a86d3641a",
"text": "We show how machine vision, learning, and planning can be combined to solve hierarchical consensus tasks. Hierarchical consensus tasks seek correct answers to a hierarchy of subtasks, where branching depends on answers at preceding levels of the hierarchy. We construct a set of hierarchical classification models that aggregate machine and human effort on different subtasks and use these inferences in planning. Optimal solution of hierarchical tasks is intractable due to the branching of task hierarchy and the long horizon of these tasks. We study Monte Carlo planning procedures that can exploit task structure to constrain the policy space for tractability. We evaluate the procedures on data collected from Galaxy Zoo II in allocating human effort and show that significant gains can be achieved.",
"title": ""
},
{
"docid": "21be75a852ab69d391d8d6f4ed911f46",
"text": "We have been developing an exoskeleton robot (ExoRob) for assisting daily upper limb movements (i.e., shoulder, elbow and wrist). In this paper we have focused on the development of a 2DOF ExoRob to rehabilitate elbow joint flexion/extension and shoulder joint internal/external rotation, as a step toward the development of a complete (i.e., 3DOF) shoulder motion assisted exoskeleton robot. The proposed ExoRob is designed to be worn on the lateral side of the upper arm in order to provide naturalistic movements at the level of elbow (flexion/extension) and shoulder joint internal/external rotation. This paper also focuses on the modeling and control of the proposed ExoRob. A kinematic model of ExoRob has been developed based on modified Denavit-Hartenberg notations. In dynamic simulations of the proposed ExoRob, a novel nonlinear sliding mode control technique with exponential reaching law and computed torque control technique is employed, where trajectory tracking that corresponds to typical rehab (passive) exercises has been carried out to evaluate the effectiveness of the developed model and controller. Simulated results show that the controller is able to drive the ExoRob efficiently to track the desired trajectories, which in this case consisted in passive arm movements. Such movements are used in rehabilitation and could be performed very efficiently with the developed ExoRob and the controller. Experiments were carried out to validate the simulated results as well as to evaluate the performance of the controller.",
"title": ""
},
{
"docid": "abf845c459ed415ac77ba91615d7b674",
"text": "We study the online market for peer-to-peer (P2P) lending, in which individuals bid on unsecured microloans sought by other individual borrowers. Using a large sample of consummated and failed listings from the largest online P2P lending marketplace Prosper.com, we test whether social networks lead to better lending outcomes, focusing on the distinction between the structural and relational aspects of networks. While the structural aspects have limited to no significance, the relational aspects are consistently significant predictors of lending outcomes, with a striking gradation based on the verifiability and visibility of a borrower’s social capital. Stronger and more verifiable relational network measures are associated with a higher likelihood of a loan being funded, a lower risk of default, and lower interest rates. We discuss the implications of our findings for financial disintermediation and the design of decentralized electronic lending markets. This version: October 2009 ∗Decision, Operations and Information Technologies Department, **Finance Department. All the authors are at Robert H. Smith School of Business, University of Maryland, College Park, MD 20742. Mingfeng Lin can be reached at [email protected]. Prabhala can be reached at [email protected]. Viswanathan can be reached at [email protected]. The authors thank Ethan Cohen-Cole, Sanjiv Das, Jerry Hoberg, Dalida Kadyrzhanova, Nikunj Kapadia, De Liu, Vojislav Maksimovic, Gordon Phillips, Kislaya Prasad, Galit Shmueli, Kelly Shue, and seminar participants at Carnegie Mellon University, University of Utah, the 2008 Summer Doctoral Program of the Oxford Internet Institute, the 2008 INFORMS Annual Conference, the Workshop on Information Systems and Economics (Paris), and Western Finance Association for their valuable comments and suggestions. Mingfeng Lin also thanks to the Ewing Marion Kauffman Foundation for the 2009 Dissertation Fellowship Award, and to the Economic Club of Washington D.C. (2008) for their generous financial support. We also thank Prosper.com for making the data for the study available. The contents of this publication are the sole responsibility of the authors. Judging Borrowers By The Company They Keep: Social Networks and Adverse Selection in Online Peer-to-Peer Lending",
"title": ""
},
{
"docid": "c20da8ccf60fbb753815d006627fa673",
"text": "This paper presents LiteOS, a multi-threaded operating system that provides Unix-like abstractions for wireless sensor networks. Aiming to be an easy-to-use platform, LiteOS offers a number of novel features, including: (1) a hierarchical file system and a wireless shell interface for user interaction using UNIX-like commands; (2) kernel support for dynamic loading and native execution of multithreaded applications; and (3) online debugging, dynamic memory, and file system assisted communication stacks. LiteOS also supports software updates through a separation between the kernel and user applications, which are bridged through a suite of system calls. Besides the features that have been implemented, we also describe our perspective on LiteOS as an enabling platform. We evaluate the platform experimentally by measuring the performance of common tasks, and demonstrate its programmability through twenty-one example applications.",
"title": ""
},
{
"docid": "16d52c166a96c5d0d40479530cf52d2b",
"text": "The dorsolateral prefrontal cortex (DLPFC) plays a crucial role in working memory. Notably, persistent activity in the DLPFC is often observed during the retention interval of delayed response tasks. The code carried by the persistent activity remains unclear, however. We critically evaluate how well recent findings from functional magnetic resonance imaging studies are compatible with current models of the role of the DLFPC in working memory. These new findings suggest that the DLPFC aids in the maintenance of information by directing attention to internal representations of sensory stimuli and motor plans that are stored in more posterior regions.",
"title": ""
},
{
"docid": "1a13a0d13e0925e327c9b151b3e5b32d",
"text": "The topic of this thesis is fraud detection in mobile communications networks by means of user profiling and classification techniques. The goal is to first identify relevant user groups based on call data and then to assign a user to a relevant group. Fraud may be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is an important application, since network operators lose a relevant portion of their revenue to fraud. Whereas the intentions of the mobile phone users cannot be observed, it is assumed that the intentions are reflected in the call data. The call data is subsequently used in describing behavioral patterns of users. Neural networks and probabilistic models are employed in learning these usage patterns from call data. These models are used either to detect abrupt changes in established usage patterns or to recognize typical usage patterns of fraud. The methods are shown to be effective in detecting fraudulent behavior by empirically testing the methods with data from real mobile communications networks. © All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.",
"title": ""
},
{
"docid": "24e0fb7247644ba6324de9c86fdfeb12",
"text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.",
"title": ""
},
{
"docid": "bfae60b46b97cf2491d6b1136c60f6a6",
"text": "Educational data mining concerns with developing methods for discovering knowledge from data that come from educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students. In our case study we try to extract useful knowledge from graduate students data collected from the college of Science and Technology – Khanyounis. The data include fifteen years period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in educational domain.",
"title": ""
},
{
"docid": "2e6af4ea3a375f67ce5df110a31aeb85",
"text": "Controlled power system separation, which separates the transmission system into islands in a controlled manner, is considered the final resort against a blackout under severe disturbances, e.g., cascading events. Three critical problems of controlled separation are where and when to separate and what to do after separation, which are rarely studied together. They are addressed in this paper by a proposed unified controlled separation scheme based on synchrophasors. The scheme decouples the three problems by partitioning them into sub-problems handled strategically in three time stages: the Offline Analysis stage determines elementary generator groups, optimizes potential separation points in between, and designs post-separation control strategies; the Online Monitoring stage predicts separation boundaries by modal analysis on synchrophasor data; the Real-time Control stage calculates a synchrophasor-based separation risk index for each boundary to predict the time to perform separation. The proposed scheme is demonstrated on a 179-bus power system by case studies.",
"title": ""
},
{
"docid": "499e2c0a0170d5b447548f85d4a9f402",
"text": "OBJECTIVE\nTo discuss the role of proprioception in motor control and in activation of the dynamic restraints for functional joint stability.\n\n\nDATA SOURCES\nInformation was drawn from an extensive MEDLINE search of the scientific literature conducted in the areas of proprioception, motor control, neuromuscular control, and mechanisms of functional joint stability for the years 1970-1999.\n\n\nDATA SYNTHESIS\nProprioception is conveyed to all levels of the central nervous system. It serves fundamental roles for optimal motor control and sensorimotor control over the dynamic restraints.\n\n\nCONCLUSIONS/APPLICATIONS\nAlthough controversy remains over the precise contributions of specific mechanoreceptors, proprioception as a whole is an essential component to controlling activation of the dynamic restraints and motor control. Enhanced muscle stiffness, of which muscle spindles are a crucial element, is argued to be an important characteristic for dynamic joint stability. Articular mechanoreceptors are attributed instrumental influence over gamma motor neuron activation, and therefore, serve to indirectly influence muscle stiffness. In addition, articular mechanoreceptors appear to influence higher motor center control over the dynamic restraints. Further research conducted in these areas will continue to assist in providing a scientific basis to the selection and development of clinical procedures.",
"title": ""
},
{
"docid": "3024c0cd172eb2a3ec33e0383ac8ba18",
"text": "The Android packaging model offers ample opportunities for malware writers to piggyback malicious code in popular apps, which can then be easily spread to a large user base. Although recent research has produced approaches and tools to identify piggybacked apps, the literature lacks a comprehensive investigation into such phenomenon. We fill this gap by: 1) systematically building a large set of piggybacked and benign apps pairs, which we release to the community; 2) empirically studying the characteristics of malicious piggybacked apps in comparison with their benign counterparts; and 3) providing insights on piggybacking processes. Among several findings providing insights analysis techniques should build upon to improve the overall detection and classification accuracy of piggybacked apps, we show that piggybacking operations not only concern app code, but also extensively manipulates app resource files, largely contradicting common beliefs. We also find that piggybacking is done with little sophistication, in many cases automatically, and often via library code.",
"title": ""
},
{
"docid": "b853f492667d4275295c0228566f4479",
"text": "This study reports spore germination, early gametophyte development and change in the reproductive phase of Drynaria fortunei, a medicinal fern, in response to changes in pH and light spectra. Germination of D. fortunei spores occurred on a wide range of pH from 3.7 to 9.7. The highest germination (63.3%) occurred on ½ strength Murashige and Skoog basal medium supplemented with 2% sucrose at pH 7.7 under white light condition. Among the different light spectra tested, red, far-red, blue, and white light resulted in 71.3, 42.3, 52.7, and 71.0% spore germination, respectively. There were no morphological differences among gametophytes grown under white and blue light. Elongated or filamentous but multiseriate gametophytes developed under red light, whereas under far-red light gametophytes grew as uniseriate filaments consisting of mostly elongated cells. Different light spectra influenced development of antheridia and archegonia in the gametophytes. Gametophytes gave rise to new gametophytes and developed antheridia and archegonia after they were transferred to culture flasks. After these gametophytes were transferred to plastic tray cells with potting mix of tree fern trunk fiber mix (TFTF mix) and peatmoss the highest number of sporophytes was found. Sporophytes grown in pots developed rhizomes.",
"title": ""
},
{
"docid": "f3cfd3e026c368146102185c31761fd2",
"text": "In this paper, we summarize the human emotion recognition using different set of electroencephalogram (EEG) channels using discrete wavelet transform. An audio-visual induction based protocol has been designed with more dynamic emotional content for inducing discrete emotions (disgust, happy, surprise, fear and neutral). EEG signals are collected using 64 electrodes from 20 subjects and are placed over the entire scalp using International 10-10 system. The raw EEG signals are preprocessed using Surface Laplacian (SL) filtering method and decomposed into three different frequency bands (alpha, beta and gamma) using Discrete Wavelet Transform (DWT). We have used “db4” wavelet function for deriving a set of conventional and modified energy based features from the EEG signals for classifying emotions. Two simple pattern classification methods, K Nearest Neighbor (KNN) and Linear Discriminant Analysis (LDA) methods are used and their performances are compared for emotional states classification. The experimental results indicate that, one of the proposed features (ALREE) gives the maximum average classification rate of 83.26% using KNN and 75.21% using LDA compared to those of conventional features. Finally, we present the average classification rate and subsets of emotions classification rate of these two different classifiers for justifying the performance of our emotion recognition system.",
"title": ""
}
] | scidocsrr |
7aa6ca63560cbb00fb545ad439475c9b | CAAD: Computer Architecture for Autonomous Driving | [
{
"docid": "368a37e8247d8a6f446b31f1dc0f635e",
"text": "In order to achieve autonomous operation of a vehicle in urban situations with unpredictable traffic, several realtime systems must interoperate, including environment perception, localization, planning, and control. In addition, a robust vehicle platform with appropriate sensors, computational hardware, networking, and software infrastructure is essential.",
"title": ""
},
{
"docid": "ed9d6571634f30797fb338a928cc8361",
"text": "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).",
"title": ""
}
] | [
{
"docid": "35da724255bbceb859d01ccaa0dec3b1",
"text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of <i>B</i><sub><i>v</i></sub>(<i>f</i>), <i>B</i><sub><i>v</i>+1</sub>(<i>f</i>). For second order equations, with rational function coefficients, <i>f</i> must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if <i>f</i> is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.",
"title": ""
},
{
"docid": "6195cf6b266d070cce5ff705daa84db7",
"text": "The geographical properties of words have recently begun to be exploited for geolocating documents based solely on their text, often in the context of social media and online content. One common approach for geolocating texts is rooted in information retrieval. Given training documents labeled with latitude/longitude coordinates, a grid is overlaid on the Earth and pseudo-documents constructed by concatenating the documents within a given grid cell; then a location for a test document is chosen based on the most similar pseudo-document. Uniform grids are normally used, but they are sensitive to the dispersion of documents over the earth. We define an alternative grid construction using k-d trees that more robustly adapts to data, especially with larger training sets. We also provide a better way of choosing the locations for pseudo-documents. We evaluate these strategies on existing Wikipedia and Twitter corpora, as well as a new, larger Twitter corpus. The adaptive grid achieves competitive results with a uniform grid on small training sets and outperforms it on the large Twitter corpus. The two grid constructions can also be combined to produce consistently strong results across all training sets.",
"title": ""
},
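The geolocation passage above replaces a uniform grid with k-d-tree cells adapted to the training data. Below is a minimal sketch of that cell construction, assuming recursive median splits on latitude/longitude with a maximum number of documents per cell; the function names and toy data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: build k-d-tree-style adaptive cells over training-document
# coordinates by recursive median splits.
import numpy as np

def kd_cells(coords, doc_ids, max_docs=5, depth=0):
    """Return a list of leaves (each a list of doc ids); split alternately on lat/lon medians."""
    if len(doc_ids) <= max_docs:
        return [list(doc_ids)]
    axis = depth % 2                              # 0 = latitude, 1 = longitude
    order = np.argsort(coords[:, axis])
    mid = len(order) // 2
    left, right = order[:mid], order[mid:]
    return (kd_cells(coords[left], doc_ids[left], max_docs, depth + 1) +
            kd_cells(coords[right], doc_ids[right], max_docs, depth + 1))

# Toy example: 20 documents with lat/lon labels.
rng = np.random.default_rng(1)
coords = np.column_stack([rng.uniform(-60, 60, 20), rng.uniform(-180, 180, 20)])
cells = kd_cells(coords, np.arange(20), max_docs=5)
print(len(cells), "cells; sizes:", [len(c) for c in cells])
# Each cell's documents would next be concatenated into one pseudo-document, and a
# test document is geolocated by retrieving the most similar pseudo-document.
```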
{
"docid": "133b2f033245dad2a2f35ff621741b2f",
"text": "In wireless sensor networks (WSNs), long lifetime requirement of different applications and limited energy storage capability of sensor nodes has led us to find out new horizons for reducing power consumption upon nodes. To increase sensor node's lifetime, circuit and protocols have to be energy efficient so that they can make a priori reactions by estimating and predicting energy consumption. The goal of this study is to present and discuss several strategies such as power-aware protocols, cross-layer optimization, and harvesting technologies used to alleviate power consumption constraint in WSNs.",
"title": ""
},
{
"docid": "e34ef27660f2e084d22863060b1c6ab1",
"text": "Plants are widely used in many indigenous systems of medicine for therapeutic purposes and are increasingly becoming popular in modern society as alternatives to synthetic medicines. Bioactive principles are derived from the products of plant primary metabolites, which are associated with the process of photosynthesis. The present review highlighted the chemical diversity and medicinal potentials of bioactive principles as well inherent toxicity concerns associated with the use of these plant products, which are of relevance to the clinician, pharmacist or toxicologist. Plant materials are composed of vast array of bioactive principles of which their isolation, identification and characterization for analytical evaluation requires expertise with cutting edge analytical protocols and instrumentations. Bioactive principles are responsible for the therapeutic activities of medicinal plants and provide unlimited opportunities for new drug leads because of their unmatched availability and chemical diversity. For the most part, the beneficial or toxic outcomes of standardized plant extracts depend on the chemical peculiarities of the containing bioactive principles.",
"title": ""
},
{
"docid": "8948409bbfe3e4d7a9384ef85383679e",
"text": "The security of today's Web rests in part on the set of X.509 certificate authorities trusted by each user's browser. Users generally do not themselves configure their browser's root store but instead rely upon decisions made by the suppliers of either the browsers or the devices upon which they run. In this work we explore the nature and implications of these trust decisions for Android users. Drawing upon datasets collected by Netalyzr for Android and ICSI's Certificate Notary, we characterize the certificate root store population present in mobile devices in the wild. Motivated by concerns that bloated root stores increase the attack surface of mobile users, we report on the interplay of certificate sets deployed by the device manufacturers, mobile operators, and the Android OS. We identify certificates installed exclusively by apps on rooted devices, thus breaking the audited and supervised root store model, and also discover use of TLS interception via HTTPS proxies employed by a market research company.",
"title": ""
},
{
"docid": "07f1caa5f4c0550e3223e587239c0a14",
"text": "Due to the unavailable GPS signals in indoor environments, indoor localization has become an increasingly heated research topic in recent years. Researchers in robotics community have tried many approaches, but this is still an unsolved problem considering the balance between performance and cost. The widely deployed low-cost WiFi infrastructure provides a great opportunity for indoor localization. In this paper, we develop a system for WiFi signal strength-based indoor localization and implement two approaches. The first is improved KNN algorithm-based fingerprint matching method, and the other is the Gaussian Process Regression (GPR) with Bayes Filter approach. We conduct experiments to compare the improved KNN algorithm with the classical KNN algorithm and evaluate the localization performance of the GPR with Bayes Filter approach. The experiment results show that the improved KNN algorithm can bring enhancement for the fingerprint matching method compared with the classical KNN algorithm. In addition, the GPR with Bayes Filter approach can provide about 2m localization accuracy for our test environment.",
"title": ""
},
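The indoor-localization passage above contrasts fingerprint matching by an improved KNN with Gaussian Process Regression plus a Bayes filter. Below is a minimal sketch of the plain weighted-KNN fingerprint-matching step only, assuming inverse-distance weighting in signal space; it does not reproduce the paper's specific KNN improvement or the GPR/Bayes-filter variant, and the toy database is made up for illustration.

```python
# Illustrative sketch: inverse-distance-weighted KNN over an RSS fingerprint
# database to estimate a 2-D position.
import numpy as np

def wknn_locate(rss_query, fingerprints, positions, k=3, eps=1e-6):
    """fingerprints: (n_points, n_aps) RSS values; positions: (n_points, 2) x/y in metres."""
    d = np.linalg.norm(fingerprints - rss_query, axis=1)   # signal-space distance
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)                            # closer fingerprints weigh more
    return (w[:, None] * positions[nearest]).sum(axis=0) / w.sum()

# Toy database: 4 reference points, 3 access points.
fingerprints = np.array([[-40., -70., -80.],
                         [-55., -60., -75.],
                         [-70., -50., -65.],
                         [-80., -65., -45.]])
positions = np.array([[0., 0.], [5., 0.], [5., 5.], [0., 5.]])
print(wknn_locate(np.array([-52., -62., -74.]), fingerprints, positions))
```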
{
"docid": "f9b6662dc19c47892bb7b95c5b7dc181",
"text": "The ability to update firmware is a feature that is found in nearly all modern embedded systems. We demonstrate how this feature can be exploited to allow attackers to inject malicious firmware modifications into vulnerable embedded devices. We discuss techniques for exploiting such vulnerable functionality and the implementation of a proof of concept printer malware capable of network reconnaissance, data exfiltration and propagation to general purpose computers and other embedded device types. We present a case study of the HP-RFU (Remote Firmware Update) LaserJet printer firmware modification vulnerability, which allows arbitrary injection of malware into the printer’s firmware via standard printed documents. We show vulnerable population data gathered by continuously tracking all publicly accessible printers discovered through an exhaustive scan of IPv4 space. To show that firmware update signing is not the panacea of embedded defense, we present an analysis of known vulnerabilities found in third-party libraries in 373 LaserJet firmware images. Prior research has shown that the design flaws and vulnerabilities presented in this paper are found in other modern embedded systems. Thus, the exploitation techniques presented in this paper can be generalized to compromise other embedded systems. Keywords-Embedded system exploitation; Firmware modification attack; Embedded system rootkit; HP-RFU vulnerability.",
"title": ""
},
{
"docid": "b9b0b6974353d4cad948b0681d8bf23b",
"text": "We describe a novel approach to modeling idiosyncra tic prosodic behavior for automatic speaker recognition. The approach computes various duration , pitch, and energy features for each estimated syl lable in speech recognition output, quantizes the featur s, forms N-grams of the quantized values, and mode ls normalized counts for each feature N-gram using sup port vector machines (SVMs). We refer to these features as “SNERF-grams” (N-grams of Syllable-base d Nonuniform Extraction Region Features). Evaluation of SNERF-gram performance is conducted o n two-party spontaneous English conversational telephone data from the Fisher corpus, using one co versation side in both training and testing. Resul ts show that SNERF-grams provide significant performance ga ins when combined with a state-of-the-art baseline system, as well as with two highly successful longrange feature systems that capture word usage and lexically constrained duration patterns. Further ex periments examine the relative contributions of fea tures by quantization resolution, N-gram length, and feature type. Results show that the optimal number of bins depends on both feature type and N-gram length, but is roughly in the range of 5 to 10 bins. We find t hat longer N-grams are better than shorter ones, and th at pitch features are most useful, followed by dura tion and energy features. The most important pitch features are those capturing pitch level, whereas the most important energy features reflect patterns of risin g a d falling. For duration features, nucleus dura tion is more important for speaker recognition than are dur ations from the onset or coda of a syllable. Overal l, we find that SVM modeling of prosodic feature sequence s yields valuable information for automatic speaker recognition. It also offers rich new opportunities for exploring how speakers differ from each other i n voluntary but habitual ways.",
"title": ""
},
{
"docid": "e2459b9991cfda1e81119e27927140c5",
"text": "This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol enabling multiplayer functionality in a mobile AR environment. One phone acts as the server and the other acts as the client. The server phone will capture the video frames, process the video frame, and send the current frame and the markers transformation matrices to the client phone.",
"title": ""
},
{
"docid": "1349ee751afaddd06f81da2b92198537",
"text": "Rapid changes in mobile cloud computing tremendously affect the telecommunication, education and healthcare industries and also business perspectives. Nowadays, advanced information and communication technology enhanced healthcare sector to improved medical services at reduced cost. However, issues related to security, privacy, quality of services and mobility and viability need to be solved before mobile cloud computing can be adopted in the healthcare industry. Mobile healthcare (mHealthcare) is one of the latest technologies in the healthcare industry which enable the industry players to collaborate each other’s especially in sharing the patience’s medical reports and histories. MHealthcare offer real-time monitoring and provide rapid diagnosis of health condition. User’s context such as location, identities and etc which are collected by active sensor is important element in MHealthcare. This paper conducts a study pertaining to mobile cloud healthcare, mobile healthcare and comparisons between the variety of applications and architecture developed/proposed by researchers.",
"title": ""
},
{
"docid": "37913e0bfe44ab63c0c229c20b53c779",
"text": "The authors present several versions of a general model, titled the E-Z Reader model, of eye movement control in reading. The major goal of the modeling is to relate cognitive processing (specifically aspects of lexical access) to eye movements in reading. The earliest and simplest versions of the model (E-Z Readers 1 and 2) merely attempt to explain the total time spent on a word before moving forward (the gaze duration) and the probability of fixating a word; later versions (E-Z Readers 3-5) also attempt to explain the durations of individual fixations on individual words and the number of fixations on individual words. The final version (E-Z Reader 5) appears to be psychologically plausible and gives a good account of many phenomena in reading. It is also a good tool for analyzing eye movement data in reading. Limitations of the model and directions for future research are also discussed.",
"title": ""
},
{
"docid": "ebc7f54b969eb491afb7032f6c2a46b6",
"text": "The Wi-Fi fingerprinting (WF) technique normally suffers from the RSS (Received Signal Strength) variance problem caused by environmental changes that are inherent in both the training and localization phases. Several calibration algorithms have been proposed but they only focus on the hardware variance problem. Moreover, smartphones were not evaluated and these are now widely used in WF systems. In this paper, we analyze various aspect of the RSS variance problem when using smartphones for WF: device type, device placement, user direction, and environmental changes over time. To overcome the RSS variance problem, we also propose a smartphone-based, indoor pedestrian-tracking system. The scheme uses the location where the maximum RSS is observed, which is preserved even though RSS varies significantly. We experimentally validate that the proposed system is robust to the RSS variance problem.",
"title": ""
},
{
"docid": "a90802bd8cb132334999e6376053d5ef",
"text": "We use single-agent and multi-agent Reinforcement Learning (RL) for learning dialogue policies in a resource allocation negotiation scenario. Two agents learn concurrently by interacting with each other without any need for simulated users (SUs) to train against or corpora to learn from. In particular, we compare the Qlearning, Policy Hill-Climbing (PHC) and Win or Learn Fast Policy Hill-Climbing (PHC-WoLF) algorithms, varying the scenario complexity (state space size), the number of training episodes, the learning rate, and the exploration rate. Our results show that generally Q-learning fails to converge whereas PHC and PHC-WoLF always converge and perform similarly. We also show that very high gradually decreasing exploration rates are required for convergence. We conclude that multiagent RL of dialogue policies is a promising alternative to using single-agent RL and SUs or learning directly from corpora.",
"title": ""
},
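The negotiation-dialogue passage above compares Q-learning, PHC and PHC-WoLF. As a point of reference, the sketch below shows only the standard tabular Q-learning update with epsilon-greedy exploration; the environment stub, state/action sizes and hyper-parameters are placeholders, not the paper's setup.

```python
# Illustrative sketch: the tabular Q-learning update underlying the single-agent
# baseline discussed in the passage above.
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.2
rng = np.random.default_rng(0)

def step(state, action):
    """Placeholder environment dynamics; a real dialogue simulator would go here."""
    return rng.integers(n_states), rng.normal(), False   # next_state, reward, done

state = 0
for _ in range(1000):
    # epsilon-greedy exploration (the passage notes that high exploration rates matter)
    action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
    next_state, reward, done = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = 0 if done else next_state
```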
{
"docid": "8f9bf08bb52e5c192512f7b43ed50ba7",
"text": "Finding the sparse solution of an underdetermined system of linear equations (the so called sparse recovery problem) has been extensively studied in the last decade because of its applications in many different areas. So, there are now many sparse recovery algorithms (and program codes) available. However, most of these algorithms have been developed for real-valued systems. This paper discusses an approach for using available real-valued algorithms (or program codes) to solve complex-valued problems, too. The basic idea is to convert the complex-valued problem to an equivalent real-valued problem and solve this new real-valued problem using any real-valued sparse recovery algorithm. Theoretical guarantees for the success of this approach will be discussed, too. On the other hand, a widely used sparse recovery idea is finding the minimum ℓ1 norm solution. For real-valued systems, this idea requires to solve a linear programming (LP) problem, but for complex-valued systems it needs to solve a second-order cone programming (SOCP) problem, which demands more computational load. However, based on the approach of this paper, the complex case can also be solved by linear programming, although the theoretical guarantee for finding the sparse solution is more limited.",
"title": ""
},
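The sparse-recovery passage above relies on converting a complex-valued linear system into an equivalent real-valued one so that real-valued solvers can be reused. The standard embedding it refers to is sketched below with a small numerical check; the code is an illustration of that identity, not the paper's algorithm.

```python
# Illustrative sketch: the standard real-valued embedding of a complex system A z = b.
import numpy as np

def realify(A, b):
    """Map complex A z = b to an equivalent real system A_r x = b_r with x = [Re z; Im z]."""
    A_r = np.block([[A.real, -A.imag],
                    [A.imag,  A.real]])
    b_r = np.concatenate([b.real, b.imag])
    return A_r, b_r

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8)) + 1j * rng.standard_normal((5, 8))
z = np.zeros(8, dtype=complex)
z[[1, 6]] = [2 - 1j, 0.5 + 3j]          # a 2-sparse complex vector
b = A @ z

A_r, b_r = realify(A, b)
x = np.concatenate([z.real, z.imag])
print(np.allclose(A_r @ x, b_r))        # True: the real system reproduces the complex one
# A k-sparse complex z maps to a paired (group-sparse) real x, which is related to the
# passage's caveat that guarantees for plain l1 on the converted system are more limited.
```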
{
"docid": "b50498964a73a59f54b3a213f2626935",
"text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and it is then fine-tuned to recover its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.",
"title": ""
},
{
"docid": "81f5905805f6faea108995cbe74a8435",
"text": "In simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) studies, average reference (AR), and digitally linked mastoid (LM) are popular re-referencing techniques in event-related potential (ERP) analyses. However, they may introduce their own physiological signals and alter the EEG/ERP outcome. A reference electrode standardization technique (REST) that calculated a reference point at infinity was proposed to solve this problem. To confirm the advantage of REST in ERP analyses of synchronous EEG-fMRI studies, we compared the reference effect of AR, LM, and REST on task-related ERP results of a working memory task during an fMRI scan. As we hypothesized, we found that the adopted reference did not change the topography map of ERP components (N1 and P300 in the present study), but it did alter the task-related effect on ERP components. LM decreased or eliminated the visual working memory (VWM) load effect on P300, and the AR distorted the distribution of VWM location-related effect at left posterior electrodes as shown in the statistical parametric scalp mapping (SPSM) of N1. ERP cortical source estimates, which are independent of the EEG reference choice, were used as the golden standard to infer the relative utility of different references on the ERP task-related effect. By comparison, REST reference provided a more integrated and reasonable result. These results were further confirmed by the results of fMRI activations and a corresponding EEG-only study. Thus, we recommend the REST, especially with a realistic head model, as the optimal reference method for ERP data analysis in simultaneous EEG-fMRI studies.",
"title": ""
},
{
"docid": "d1e43c347f708547aefa07b3c83ee428",
"text": "Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State of the art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate if NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated telepresence robots (n=38), and interacted with telepresence robots (n=12). Overall, the results from our three studies indicated that NARS may be applied to telepresence robots, and culture, gender, and prior robot experience can be influential factors on the NARS score.",
"title": ""
},
{
"docid": "6726479c1b8e5502552dfb8e4fdccb0d",
"text": "Cluster ensembles generate a large number of different clustering solutions and combine them into a more robust and accurate consensus clustering. On forming the ensembles, the literature has suggested that higher diversity among ensemble members produces higher performance gain. In contrast, some studies also indicated that medium diversity leads to the best performing ensembles. Such contradicting observations suggest that different data, with varying characteristics, may require different treatments. We empirically investigate this issue by examining the behavior of cluster ensembles on benchmark data sets. This leads to a novel framework that selects ensemble members for each data set based on its own characteristics. Our framework first generates a diverse set of solutions and combines them into a consensus partition P*. Based on the diversity between the ensemble members and P*, a subset of ensemble members is selected and combined to obtain the final output. We evaluate the proposed method on benchmark data sets and the results show that the proposed method can significantly improve the clustering performance, often by a substantial margin. In some cases, we were able to produce final solutions that significantly outperform even the best ensemble members.",
"title": ""
},
{
"docid": "99880fca88bef760741f48166a51ca6f",
"text": "This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.",
"title": ""
},
{
"docid": "0df1a15c02c29d9462356641fbe78b43",
"text": "Localization is an essential and important research issue in wireless sensor networks (WSNs). Most localization schemes focus on static sensor networks. However, mobile sensors are required in some applications such that the sensed area can be enlarged. As such, a localization scheme designed for mobile sensor networks is necessary. In this paper, we propose a localization scheme to improve the localization accuracy of previous work. In this proposed scheme, the normal nodes without location information can estimate their own locations by gathering the positions of location-aware nodes (anchor nodes) and the one-hop normal nodes whose locations are estimated from the anchor nodes. In addition, we propose a scheme that predicts the moving direction of sensor nodes to increase localization accuracy. Simulation results show that the localization error in our proposed scheme is lower than the previous schemes in various mobility models and moving speeds.",
"title": ""
}
] | scidocsrr |
a94558043aadec25b546b7c275f808ed | Deformable Pose Traversal Convolution for 3D Action and Gesture Recognition | [
{
"docid": "1d6e23fedc5fa51b5125b984e4741529",
"text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream. In this paper, we study the problem of online action detection from the streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.",
"title": ""
},
{
"docid": "401b2494b8b032751c219726671cb48e",
"text": "Current state-of-the-art approaches to skeleton-based action recognition are mostly based on recurrent neural networks (RNN). In this paper, we propose a novel convolutional neural networks (CNN) based framework for both action classification and detection. Raw skeleton coordinates as well as skeleton motion are fed directly into CNN for label prediction. A novel skeleton transformer module is designed to rearrange and select important skeleton joints automatically. With a simple 7-layer network, we obtain 89.3% accuracy on validation set of the NTU RGB+D dataset. For action detection in untrimmed videos, we develop a window proposal network to extract temporal segment proposals, which are further classified within the same network. On the recent PKU-MMD dataset, we achieve 93.7% mAP, surpassing the baseline by a large margin.",
"title": ""
}
] | [
{
"docid": "901174e2dd911afada2e8ccf245d25f3",
"text": "This article presents the state of the art in passive devices for enhancing limb movement in people with neuromuscular disabilities. Both upper- and lower-limb projects and devices are described. Special emphasis is placed on a passive functional upper-limb orthosis called the Wilmington Robotic Exoskeleton (WREX). The development and testing of the WREX with children with limited arm strength are described. The exoskeleton has two links and 4 degrees of freedom. It uses linear elastic elements that balance the effects of gravity in three dimensions. The experiences of five children with arthrogryposis who used the WREX are described.",
"title": ""
},
{
"docid": "11557714ac3bbd9fc9618a590722212e",
"text": "In Taobao, the largest e-commerce platform in China, billions of items are provided and typically displayed with their images.For better user experience and business effectiveness, Click Through Rate (CTR) prediction in online advertising system exploits abundant user historical behaviors to identify whether a user is interested in a candidate ad. Enhancing behavior representations with user behavior images will help understand user's visual preference and improve the accuracy of CTR prediction greatly. So we propose to model user preference jointly with user behavior ID features and behavior images. However, training with user behavior images brings tens to hundreds of images in one sample, giving rise to a great challenge in both communication and computation. To handle these challenges, we propose a novel and efficient distributed machine learning paradigm called Advanced Model Server (AMS). With the well-known Parameter Server (PS) framework, each server node handles a separate part of parameters and updates them independently. AMS goes beyond this and is designed to be capable of learning a unified image descriptor model shared by all server nodes which embeds large images into low dimensional high level features before transmitting images to worker nodes. AMS thus dramatically reduces the communication load and enables the arduous joint training process. Based on AMS, the methods of effectively combining the images and ID features are carefully studied, and then we propose a Deep Image CTR Model. Our approach is shown to achieve significant improvements in both online and offline evaluations, and has been deployed in Taobao display advertising system serving the main traffic.",
"title": ""
},
{
"docid": "8994470e355b5db188090be731ee4fe9",
"text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.",
"title": ""
},
{
"docid": "557451621286ecd4fbf21909ff88450f",
"text": "BACKGROUND\nMany studies have demonstrated that honey has antibacterial activity in vitro, and a small number of clinical case studies have shown that application of honey to severely infected cutaneous wounds is capable of clearing infection from the wound and improving tissue healing. Research has also indicated that honey may possess anti-inflammatory activity and stimulate immune responses within a wound. The overall effect is to reduce infection and to enhance wound healing in burns, ulcers, and other cutaneous wounds. The objective of the study was to find out the results of topical wound dressings in diabetic wounds with natural honey.\n\n\nMETHODS\nThe study was conducted at department of Orthopaedics, Unit-1, Liaquat University of Medical and Health Sciences, Jamshoro from July 2006 to June 2007. Study design was experimental. The inclusion criteria were patients of either gender with any age group having diabetic foot Wagner type I, II, III and II. The exclusion criteria were patients not willing for studies and who needed urgent amputation due to deteriorating illness. Initially all wounds were washed thoroughly and necrotic tissues removed and dressings with honey were applied and continued up to healing of wounds.\n\n\nRESULTS\nTotal number of patients was 12 (14 feet). There were 8 males (66.67%) and 4 females (33.33%), 2 cases (16.67%) were presented with bilateral diabetic feet. The age range was 35 to 65 years (46 +/- 9.07 years). Amputations of big toe in 3 patients (25%), second and third toe ray in 2 patients (16.67%) and of fourth and fifth toes at the level of metatarsophalengeal joints were done in 3 patients (25%). One patient (8.33%) had below knee amputation.\n\n\nCONCLUSION\nIn our study we observed excellent results in treating diabetic wounds with dressings soaked with natural honey. The disability of diabetic foot patients was minimized by decreasing the rate of leg or foot amputations and thus enhancing the quality and productivity of individual life.",
"title": ""
},
{
"docid": "b24f07add0da3931b23f4a13ea6983b9",
"text": "Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and sensitivity of gyroscopes, the proposed algorithm extracts the frequency domain features from three-dimensional (3D) angular velocities of a smartphone through FFT (fast Fourier transform) and identifies whether its holder is walking or not irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted by involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves the precision of 93.76 % and recall of 93.65 % for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74 % , and is better than both of the several well-known counterparts and commercial products.",
"title": ""
},
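The step-counting passage above extracts frequency-domain features from 3D angular velocities via the FFT and tracks the step frequency. A minimal single-window sketch of that idea is given below; the sampling rate, walking band and power-ratio threshold are illustrative assumptions rather than the paper's tuned values, and real sensor placement affects whether the dominant gyroscope frequency equals the step frequency.

```python
# Illustrative sketch: detect walking and count steps from gyroscope data by
# locating a dominant frequency in a typical walking band.
import numpy as np

def count_steps(gyro, fs=50.0, band=(0.5, 3.0), power_ratio_thresh=0.2):
    """gyro: (n_samples, 3) angular velocity; returns (is_walking, estimated_steps)."""
    mag = np.linalg.norm(gyro, axis=1)
    mag = mag - mag.mean()                         # remove DC before the FFT
    spec = np.abs(np.fft.rfft(mag)) ** 2
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if spec.sum() == 0 or not in_band.any():
        return False, 0
    peak = freqs[in_band][np.argmax(spec[in_band])]
    walking = spec[in_band].max() / spec.sum() >= power_ratio_thresh
    steps = int(round(peak * len(mag) / fs)) if walking else 0
    return walking, steps

# Toy signal: 10 s of "walking" at ~1.8 steps/s plus noise.
t = np.arange(0, 10, 1 / 50.0)
gyro = np.column_stack([np.sin(2 * np.pi * 1.8 * t)] * 3) + 0.1 * np.random.randn(len(t), 3)
print(count_steps(gyro))   # expected: (True, ~18)
```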
{
"docid": "e464cde1434026c17b06716c6a416b7a",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
},
{
"docid": "4d42e42469fcead51969f3e642920abc",
"text": "In this paper, we present a dual-band antenna for Long Term Evolution (LTE) handsets. The proposed antenna is composed of a meandered monopole operating in the 700 MHz band and a parasitic element which radiates in the 2.5–2.7 GHz band. Two identical antennas are then closely positioned on the same 120×50 mm2 ground plane (Printed Circuit Board) which represents a modern-size PDA-mobile phone. To enhance the port-to-port isolation of the antennas, a neutralization technique is implemented between them. Scattering parameters, radiations patterns and total efficiencies are presented to illustrate the performance of the antenna-system.",
"title": ""
},
{
"docid": "fff89d9e97dbb5a13febe48c35d08c94",
"text": "The positive effects of social popularity (i.e., information based on other consumers’ behaviors) and deal scarcity (i.e., information provided by product vendors) on consumers’ consumption behaviors are well recognized. However, few studies have investigated their potential joint and interaction effects and how such effects may differ at different timing of a shopping process. This study examines the individual and interaction effects of social popularity and deal scarcity as well as how such effects change as consumers’ shopping goals become more concrete. The results of a laboratory experiment show that in the initial shopping stage when consumers do not have specific shopping goals, social popularity and deal scarcity information weaken each other’s effects; whereas in the later shopping stage when consumers have constructed concrete shopping goals, these two information cues reinforce each other’s effects. Implications on theory and practice are discussed.",
"title": ""
},
{
"docid": "d0e977ab137cd004420bda28bd0b11be",
"text": "This study investigates the roles of cohesion and coherence in evaluations of essay quality. Cohesion generally has a facilitative effect on text comprehension and is assumed to be related to essay coherence. By contrast, recent studies of essay writing have demonstrated that computational indices of cohesion are not predictive of evaluations of writing quality. This study investigates expert ratings of individual text features, including coherence, in order to examine their relation to evaluations of holistic essay quality. The results suggest that coherence is an important attribute of overall essay quality, but that expert raters evaluate coherence based on the absence of cohesive cues in the essays rather than their presence. This finding has important implications for text understanding and the role of coherence in writing quality.",
"title": ""
},
{
"docid": "733f5029329072adf5635f0b4d0ad1cb",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "08353c7d40a0df4909b09f2d3e5ab4fe",
"text": "Object detection has made great progress in the past few years along with the development of deep learning. However, most current object detection methods are resource hungry, which hinders their wide deployment to many resource restricted usages such as usages on always-on devices, battery-powered low-end devices, etc. This paper considers the resource and accuracy trade-off for resource-restricted usages during designing the whole object detection framework. Based on the deeply supervised object detection (DSOD) framework, we propose Tiny-DSOD dedicating to resource-restricted usages. Tiny-DSOD introduces two innovative and ultra-efficient architecture blocks: depthwise dense block (DDB) based backbone and depthwise feature-pyramid-network (D-FPN) based front-end. We conduct extensive experiments on three famous benchmarks (PASCAL VOC 2007, KITTI, and COCO), and compare Tiny-DSOD to the state-of-the-art ultra-efficient object detection solutions such as Tiny-YOLO, MobileNet-SSD (v1 & v2), SqueezeDet, Pelee, etc. Results show that Tiny-DSOD outperforms these solutions in all the three metrics (parameter-size, FLOPs, accuracy) in each comparison. For instance, Tiny-DSOD achieves 72.1% mAP with only 0.95M parameters and 1.06B FLOPs, which is by far the state-of-the-arts result with such a low resource requirement.∗",
"title": ""
},
{
"docid": "2665314258f4b7f59a55702166f59fcc",
"text": "In this paper, a wireless power transfer system with magnetically coupled resonators is studied. The idea to use metamaterials to enhance the coupling coefficient and the transfer efficiency is proposed and analyzed. With numerical calculations of a system with and without metamaterials, we show that the transfer efficiency can be improved with metamaterials.",
"title": ""
},
{
"docid": "be1c50de2963341423960ba0f59fbc1f",
"text": "Deep neural networks have been shown to be very successful at learning feature hierarchies in supervised learning tasks. Generative models, on the other hand, have benefited less from hierarchical models with multiple layers of latent variables. In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with some existing variational methods, and provide some limitations on the kind of features existing models can learn. Finally we propose an alternative architecture that does not suffer from these limitations. Our model is able to learn highly interpretable and disentangled hierarchical features on several natural image datasets with no taskspecific regularization.",
"title": ""
},
{
"docid": "00602badbfba6bc97dffbdd6c5a2ae2d",
"text": "Accurately drawing 3D objects is difficult for untrained individuals, as it requires an understanding of perspective and its effects on geometry and proportions. Step-by-step tutorials break the complex task of sketching an entire object down into easy-to-follow steps that even a novice can follow. However, creating such tutorials requires expert knowledge and is time-consuming. As a result, the availability of tutorials for a given object or viewpoint is limited. How2Sketch (H2S) addresses this problem by automatically generating easy-to-follow tutorials for arbitrary 3D objects. Given a segmented 3D model and a camera viewpoint, H2S computes a sequence of steps for constructing a drawing scaffold comprised of geometric primitives, which helps the user draw the final contours in correct perspective and proportion. To make the drawing scaffold easy to construct, the algorithm solves for an ordering among the scaffolding primitives and explicitly makes small geometric modifications to the size and location of the object parts to simplify relative positioning. Technically, we formulate this scaffold construction as a single selection problem that simultaneously solves for the ordering and geometric changes of the primitives. We generate different tutorials on man-made objects using our method and evaluate how easily the tutorials can be followed with a user study.",
"title": ""
},
{
"docid": "efec2ff9384e17a698c88e742e41bcc9",
"text": "— A new versatile Hydraulically-powered Quadruped robot (HyQ) has been developed to serve as a platform to study not only highly dynamic motions such as running and jumping, but also careful navigation over very rough terrain. HyQ stands 1 meter tall, weighs roughly 90kg and features 12 torque-controlled joints powered by a combination of hydraulic and electric actuators. The hydraulic actuation permits the robot to perform powerful and dynamic motions that are hard to achieve with more traditional electrically actuated robots. This paper describes design and specifications of the robot and presents details on the hardware of the quadruped platform, such as the mechanical design of the four articulated legs and of the torso frame, and the configuration of the hydraulic power system. Results from the first walking experiments are presented along with test studies using a previously built prototype leg. 1 INTRODUCTION The development of mobile robotic platforms is an important and active area of research. Within this domain, the major focus has been to develop wheeled or tracked systems that cope very effectively with flat and well-structured solid surfaces (e.g. laboratories and roads). In recent years, there has been considerable success with robotic vehicles even for off-road conditions [1]. However, wheeled robots still have major limitations and difficulties in navigating uneven and rough terrain. These limitations and the capabilities of legged animals encouraged researchers for the past decades to focus on the construction of biologically inspired legged machines. These robots have the potential to outperform the more traditional designs with wheels and tracks in terms of mobility and versatility. The vast majority of the existing legged robots have been, and continue to be, actuated by electric motors with high gear-ratio reduction drives, which are popular because of their size, price, ease of use and accuracy of control. However, electric motors produce small torques relative to their size and weight, thereby making reduction drives with high ratios essential to convert velocity into torque. Unfortunately, this approach results in systems with reduced speed capability and limited passive back-driveability and therefore not very suitable for highly dynamic motions and interactions with unforeseen terrain variance. Significant examples of such legged robots are: the biped series of HRP robots [2], Toyota humanoid robot [3], and Honda's Asimo [4]; and the quadruped robot series of Hirose et al. [5], Sony's AIBO [6] and Little Dog [7]. In combination with high position gain control and …",
"title": ""
},
{
"docid": "01295570af41ff14f0b55d6fe7139c9d",
"text": "YES is a simplified stroke-based method for sorting Chinese characters. It is free from stroke counting and grouping, and thus much faster and more accurate than the traditional method. This paper presents a collation element table built in YES for a large joint Chinese character set covering (a) all 20,902 characters of Unicode CJK Unified Ideographs, (b) all 11,408 characters in the Complete List of Chinese Characters Used by the Media in 2013, (c) all 13,000 plus characters in the latest versions of Xinhua Dictionary(v11) and Contemporary Chinese Dictionary(v6). Of the 20,902 Chinese characters in Unicode, 97.23% have one-to-one relationship with their stroke order codes in YES, comparing with 90.69% of the traditional method. Enhanced with the secondary and tertiary sorting levels of stroke layout and Unicode value, there is a guarantee of one-to-one relationship between the characters and collation elements. The collation element table has been successfully applied to sorting CC-CEDICT, a Chinese-English dictionary of over 112,000 word entries.",
"title": ""
},
{
"docid": "dbe0b895c78dd90c69cc1a1f8289aadf",
"text": "This paper presents the design procedure of monolithic microwave integrated circuit (MMIC) high-power amplifiers (HPAs) as well as implementation of high-efficiency and compact-size HPAs in a 0.25- μm AlGaAs-InGaAs pHEMT technology. Presented design techniques used to extend bandwidth, improve efficiency, and reduce chip area of the HPAs are described in detail. The first HPA delivers 5 W of output power with 40% power-added efficiency (PAE) in the frequency band of 8.5-12.5 GHz, while providing 20 dB of small-signal gain. The second HPA delivers 8 W of output power with 35% PAE in the frequency band of 7.5-12 GHz, while maintaining a small-signal gain of 17.5 dB. The 8-W HPA chip area is 8.8 mm2, which leads to the maximum power/area ratio of 1.14 W/mm2. These are the lowest area and highest power/area ratio reported in GaAs HPAs operating within the same frequency band.",
"title": ""
},
{
"docid": "e8ef5dfb9aafb4a2b453ebdda6e923ea",
"text": "This paper addresses the problem of vegetation detection from laser measurements. The ability to detect vegetation is important for robots operating outdoors, since it enables a robot to navigate more efficiently and safely in such environments. In this paper, we propose a novel approach for detecting low, grass-like vegetation using laser remission values. In our algorithm, the laser remission is modeled as a function of distance, incidence angle, and material. We classify surface terrain based on 3D scans of the surroundings of the robot. The model is learned in a self-supervised way using vibration-based terrain classification. In all real world experiments we carried out, our approach yields a classification accuracy of over 99%. We furthermore illustrate how the learned classifier can improve the autonomous navigation capabilities of mobile robots.",
"title": ""
},
{
"docid": "2793f528a9b29345b1ee8ce1202933e3",
"text": "Neural Networks are prevalent in todays NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up training Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.",
"title": ""
},
{
"docid": "884281b32a82a1d1f9811acc73257387",
"text": "The low power wide area network (LPWAN) technologies, which is now embracing a booming era with the development in the Internet of Things (IoT), may offer a brand new solution for current smart grid communications due to their excellent features of low power, long range, and high capacity. The mission-critical smart grid communications require secure and reliable connections between the utilities and the devices with high quality of service (QoS). This is difficult to achieve for unlicensed LPWAN technologies due to the crowded license-free band. Narrowband IoT (NB-IoT), as a licensed LPWAN technology, is developed based on the existing long-term evolution specifications and facilities. Thus, it is able to provide cellular-level QoS, and henceforth can be viewed as a promising candidate for smart grid communications. In this paper, we introduce NB-IoT to the smart grid and compare it with the existing representative communication technologies in the context of smart grid communications in terms of data rate, latency, range, etc. The overall requirements of communications in the smart grid from both quantitative and qualitative perspectives are comprehensively investigated and each of them is carefully examined for NB-IoT. We further explore the representative applications in the smart grid and analyze the corresponding feasibility of NB-IoT. Moreover, the performance of NB-IoT in typical scenarios of the smart grid communication environments, such as urban and rural areas, is carefully evaluated via Monte Carlo simulations.",
"title": ""
}
] | scidocsrr |
6289f60d651706a549de7eaded26b56d | Modeling data entry rates for ASR and alternative input methods | [
{
"docid": "b876e62db8a45ab17d3a9d217e223eb7",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
}
] | [
{
"docid": "a4e5a60d9ce417ef74fc70580837cd55",
"text": "Emotional processes are important to survive. The Darwinian adaptive concept of stress refers to natural selection since evolved individuals have acquired effective strategies to adapt to the environment and to unavoidable changes. If demands are abrupt and intense, there might be insufficient time to successful responses. Usually, stress produces a cognitive or perceptual evaluation (emotional memory) which motivates to make a plan, to take a decision and to perform an action to face success‐ fully the demand. Between several kinds of stresses, there are psychosocial and emotional stresses with cultural, social and political influences. The cultural changes have modified the way in which individuals socially interact. Deficits in familiar relationships and social isolation alter physical and mental health in young students, producing reduction of their capacities of facing stressors in school. Adolescence is characterized by significant physiological, anatomical, and psychological changes in boys and girls, who become vulnerable to psychiatric disorders. In particular for young adult students, anxiety and depression symptoms could interfere in their academic performance. In this chapter, we reviewed approaches to the study of anxiety and depression symptoms related with the academic performance in adolescent and graduate students. Results from available published studies in academic journals are reviewed to discuss the importance to detect information about academic performance, which leads to discover in many cases the very commonly subdiagnosed psychiatric disorders in adolescents, that is, anxiety and depression. With the reviewed evidence of how anxiety and depression in young adult students may alter their main activity in life (studying and academic performance), we © 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. discussed data in order to show a way in which professionals involved in schools could support students and stablish a routine of intervention in any case.",
"title": ""
},
{
"docid": "e2c4f9cfce1db6282fe3a23fd5d6f3a4",
"text": "In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases.",
"title": ""
},
{
"docid": "4cf77462459efa81f6ed856655ae7454",
"text": "Antibody response to the influenza immunization was investigated in 83 1st-semester healthy university freshmen. Elevated levels of loneliness throughout the semester and small social networks were independently associated with poorer antibody response to 1 component of the vaccine. Those with both high levels of loneliness and a small social network had the lowest antibody response. Loneliness was also associated with greater psychological stress and negative affect, less positive affect, poorer sleep efficiency and quality, and elevations in circulating levels of cortisol. However, only the stress data were consistent with mediation of the loneliness-antibody response relation. None of these variables were associated with social network size, and hence none were potential mediators of the relation between network size and immunization response.",
"title": ""
},
{
"docid": "cba5c85ee9a9c4f97f99c1fcb35d0623",
"text": "Virtualized Cloud platforms have become increasingly common and the number of online services hosted on these platforms is also increasing rapidly. A key problem faced by providers in managing these services is detecting the performance anomalies and adjusting resources accordingly. As online services generate a very large amount of monitored data in the form of time series, it becomes very difficult to process this complex data by traditional approaches. In this work, we present a novel distributed parallel approach for performance anomaly detection. We build upon Holt-Winters forecasting for automatic aberrant behavior detection in time series. First, we extend the technique to work with MapReduce paradigm. Next, we correlate the anomalous metrics with the target Service Level Objective (SLO) in order to locate the suspicious metrics. We implemented and evaluated our approach on a production Cloud encompassing IaaS and PaaS service models. Experimental results confirm that our approach is efficient and effective in capturing the metrics causing performance anomalies in large time series datasets.",
"title": ""
},
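The anomaly-detection passage above builds on Holt-Winters forecasting for aberrant-behavior detection in metric time series. A simplified, single-machine sketch is shown below, assuming statsmodels' ExponentialSmoothing and a fixed three-sigma residual band; the paper's MapReduce distribution and adaptive (Brutlag-style) confidence bands are not reproduced.

```python
# Illustrative sketch (simplified): flag points whose residual from a Holt-Winters
# fit exceeds a confidence band.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
n, period = 24 * 14, 24                         # two weeks of hourly metrics, daily season
t = np.arange(n)
series = 50 + 10 * np.sin(2 * np.pi * t / period) + rng.normal(0, 1, n)
series[200] += 25                               # inject one anomaly

fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                           seasonal_periods=period).fit()
resid = series - fit.fittedvalues
band = 3 * resid.std()                          # adaptive bands would vary over time
anomalies = np.where(np.abs(resid) > band)[0]
print("anomalous indices:", anomalies)
```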
{
"docid": "92c6e4ec2497c467eaa31546e2e2be0e",
"text": "The subjective sense of future time plays an essential role in human motivation. Gradually, time left becomes a better predictor than chronological age for a range of cognitive, emotional, and motivational variables. Socioemotional selectivity theory maintains that constraints on time horizons shift motivational priorities in such a way that the regulation of emotional states becomes more important than other types of goals. This motivational shift occurs with age but also appears in other contexts (for example, geographical relocations, illnesses, and war) that limit subjective future time.",
"title": ""
},
{
"docid": "ea3ed48d47473940134027caea2679f9",
"text": "With rapid development of face recognition and detection techniques, the face has been frequently used as a biometric to find illegitimate access. It relates to a security issues of system directly, and hence, the face spoofing detection is an important issue. However, correctly classifying spoofing or genuine faces is challenging due to diverse environment conditions such as brightness and color of a face skin. Therefore we propose a novel approach to robustly find the spoofing faces using the highlight removal effect, which is based on the reflection information. Because spoofing face image is recaptured by a camera, it has additional light information. It means that spoofing image could have much more highlighted areas and abnormal reflection information. By extracting these differences, we are able to generate features for robust face spoofing detection. In addition, the spoofing face image and genuine face image have distinct textures because of surface material of medium. The skin and spoofing medium are expected to have different texture, and some genuine image characteristics are distorted such as color distribution. We achieve state-of-the-art performance by concatenating these features. It significantly outperforms especially for the error rate.",
"title": ""
},
{
"docid": "a1a4b028fba02904333140e6791709bb",
"text": "Cross-site scripting (also referred to as XSS) is a vulnerability that allows an attacker to send malicious code (usually in the form of JavaScript) to another user. XSS is one of the top 10 vulnerabilities on Web application. While a traditional cross-site scripting vulnerability exploits server-side codes, DOM-based XSS is a type of vulnerability which affects the script code being executed in the clients browser. DOM-based XSS vulnerabilities are much harder to be detected than classic XSS vulnerabilities because they reside on the script codes from Web sites. An automated scanner needs to be able to execute the script code without errors and to monitor the execution of this code to detect such vulnerabilities. In this paper, we introduce a distributed scanning tool for crawling modern Web applications on a large scale and detecting, validating DOMbased XSS vulnerabilities. Very few Web vulnerability scanners can really accomplish this.",
"title": ""
},
{
"docid": "046245929e709ef2935c9413619ab3d7",
"text": "In recent years, there has been a growing intensity of competition in virtually all areas of business in both markets upstream for raw materials such as components, supplies, capital and technology and markets downstream for consumer goods and services. This paper examines the relationships among generic strategy, competitive advantage, and organizational performance. Firstly, the nature of generic strategies, competitive advantage, and organizational performance is examined. Secondly, the relationship between generic strategies and competitive advantage is analyzed. Finally, the implications of generic strategies, organizational performance, performance measures and competitive advantage are studied. This study focuses on: (i) the relationship of generic strategy and organisational performance in Australian manufacturing companies participating in the “Best Practice Program in Australia”, (ii) the relationship between generic strategies and competitive advantage, and (iii) the relationship among generic strategies, competitive advantage and organisational performance. 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d8484cc7973882777f65a28fcdbb37be",
"text": "The reported power analysis attacks on hardware implementations of the MICKEY family of streams ciphers require a large number of power traces. The primary motivation of our work is to break an implementation of the cipher when only a limited number of power traces can be acquired by an adversary. In this paper, we propose a novel approach to mount a Template attack (TA) on MICKEY-128 2.0 stream cipher using Particle Swarm Optimization (PSO) generated initialization vectors (IVs). In addition, we report the results of power analysis against a MICKEY-128 2.0 implementation on a SASEBO-GII board to demonstrate our proposed attack strategy. The captured power traces were analyzed using Least Squares Support Vector Machine (LS-SVM) learning algorithm based binary classifiers to segregate the power traces into the respective Hamming distance (HD) classes. The outcomes of the experiments reveal that our proposed power analysis attack strategy requires a much lesser number of IVs compared to a standard Correlation Power Analysis (CPA) attack on MICKEY-128 2.0 during the key loading phase of the cipher.",
"title": ""
},
{
"docid": "2a1eea68ab90c34fbe90e8f6ac28059e",
"text": "This article discusses how to avoid biased questions in survey instruments, how to motivate people to complete instruments and how to evaluate instruments. In the context of survey evaluation, we discuss how to assess survey reliability i.e. how reproducible a survey's data is and survey validity i.e. how well a survey instrument measures what it sets out to measure.",
"title": ""
},
{
"docid": "2d845ef6552b77fb4dd0d784233aa734",
"text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as are the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.",
"title": ""
},
{
"docid": "f90cb4fdf664e24ceeb3727eda3543b3",
"text": "The self-powering, long-lasting, and functional features of embedded wireless microsensors appeal to an ever-expanding application space in monitoring, control, and diagnosis for military, commercial, industrial, space, and biomedical applications. Extended operational life, however, is difficult to achieve when power-intensive functions like telemetry draw whatever little energy is available from energy-storage microdevices like thin-film lithium-ion batteries and/or microscale fuel cells. Harvesting ambient energy overcomes this deficit by continually replenishing the energy reservoir and indefinitely extending system lifetime. In this paper, a prototyped circuit that precharges, detects, and synchronizes to a variable voltage-constrained capacitor verifies experimentally that harvesting energy electrostatically from vibrations is possible. Experimental results show that, on average (excluding gate-drive and control losses), the system harvests 9.7 nJ/cycle by investing 1.7 nJ/cycle, yielding a net energy gain of approximately 8 nJ/cycle at an average of 1.6 ¿W (in typical applications) for every 200 pF variation. Projecting and including reasonable gate-drive and controller losses reduces the net energy gain to 6.9 nJ/cycle at 1.38 ¿W.",
"title": ""
},
{
"docid": "b76f10452e4a4b0d7408e6350b263022",
"text": "In this paper, a Y-Δ hybrid connection for a high-voltage induction motor is described. Low winding harmonic content is achieved by careful consideration of the interaction between the Y- and Δ-connected three-phase winding sets so that the magnetomotive force (MMF) in the air gap is close to sinusoid. Essentially, the two winding sets operate in a six-phase mode. This paper goes on to verify that the fundamental distribution coefficient for the stator MMF is enhanced compared to a standard three-phase winding set. The design method for converting a conventional double-layer lap winding in a high-voltage induction motor into a Y-Δ hybrid lap winding is described using standard winding theory as often applied to small- and medium-sized motors. The main parameters addressed when designing the winding are the conductor wire gauge, coil turns, and parallel winding branches in the Y and Δ connections. A winding design scheme for a 1250-kW 6-kV induction motor is put forward and experimentally validated; the results show that the efficiency can be raised effectively without increasing the cost.",
"title": ""
},
{
"docid": "78c6ca3a62314b1033470a03c90619be",
"text": "Metabolomics is the comprehensive study of small molecule metabolites in biological systems. By assaying and analyzing thousands of metabolites in biological samples, it provides a whole picture of metabolic status and biochemical events happening within an organism and has become an increasingly powerful tool in the disease research. In metabolomics, it is common to deal with large amounts of data generated by nuclear magnetic resonance (NMR) and/or mass spectrometry (MS). Moreover, based on different goals and designs of studies, it may be necessary to use a variety of data analysis methods or a combination of them in order to obtain an accurate and comprehensive result. In this review, we intend to provide an overview of computational and statistical methods that are commonly applied to analyze metabolomics data. The review is divided into five sections. The first two sections will introduce the background and the databases and resources available for metabolomics research. The third section will briefly describe the principles of the two main experimental methods that produce metabolomics data: MS and NMR, followed by the fourth section that describes the preprocessing of the data from these two approaches. In the fifth and the most important section, we will review four main types of analysis that can be performed on metabolomics data with examples in metabolomics. These are unsupervised learning methods, supervised learning methods, pathway analysis methods and analysis of time course metabolomics data. We conclude by providing a table summarizing the principles and tools that we discussed in this review.",
"title": ""
},
{
"docid": "9292d1a97913257cfd1e72645969a988",
"text": "A digital PLL employing an adaptive tracking technique and a novel frequency acquisition scheme achieves a wide tracking range and fast frequency acquisition. The test chip fabricated in a 0.13 mum CMOS process operates from 0.6 GHz to 2 GHz and achieves better than plusmn3200 ppm frequency tracking range when the reference clock is modulated with a 1 MHz sine wave.",
"title": ""
},
{
"docid": "c3473e7fe7b46628d384cbbe10bfe74c",
"text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.",
"title": ""
},
{
"docid": "3e94030eb03806d79c5e66aa90408fbb",
"text": "The sampling rate of the sensors in wireless sensor networks (WSNs) determines the rate of its energy consumption since most of the energy is used in sampling and transmission. To save the energy in WSNs and thus prolong the network lifetime, we present a novel approach based on the compressive sensing (CS) framework to monitor 1-D environmental information in WSNs. The proposed technique is based on CS theory to minimize the number of samples taken by sensor nodes. An innovative feature of our approach is a new random sampling scheme that considers the causality of sampling, hardware limitations and the trade-off between the randomization scheme and computational complexity. In addition, a sampling rate indicator (SRI) feedback scheme is proposed to enable the sensor to adjust its sampling rate to maintain an acceptable reconstruction performance while minimizing the number of samples. A significant reduction in the number of samples required to achieve acceptable reconstruction error is demonstrated using real data gathered by a WSN located in the Hessle Anchorage of the Humber Bridge.",
"title": ""
},
{
"docid": "c94c9913634f715049d90a55282908ca",
"text": "Indirect field oriented control for induction machine requires the knowledge of rotor time constant to estimate the rotor flux linkages. Here an online method for estimating the rotor time constant and stator resistance is presented. The problem is formulated as a nonlinear least-squares problem and a procedure is presented that guarantees the minimum is found in a finite number of steps. Experimental results are presented. Two different approaches to implementing the algorithm online are discussed. Simulations are also presented to show how the algorithm works online",
"title": ""
},
{
"docid": "670ade2a60809bd501b3d365d173f4ab",
"text": "Attack graph is a tool to analyze multi-stage, multi-host attack scenarios in a network. It is a complete graph where each attack scenario is depicted by an attack path which is essentially a series of exploits. Each exploit in the series satisfies the pre-conditions for subsequent exploits and makes a casual relationship among them. One of the intrinsic problem with the generation of such a full attack graph is its scalability. In this work, an approach based on planner has been proposed for time-efficient scalable representation of the attack graphs. A planner is a special purpose search algorithm from artificial intelligence domain, used for finding out solutions within a large state space without suffering state space explosion. A case study has also been presented and the proposed methodology is found to be efficient than some of the earlier reported works.",
"title": ""
}
] | scidocsrr |
841fc2f45374901757ef197cf666e2e9 | Perceived learning environment and students ’ emotional experiences : A multilevel analysis of mathematics classrooms * | [
{
"docid": "e47276a0b7139e31266d032bb3a0cbfc",
"text": "We assessed math anxiety in 6ththrough 12th-grade children (N = 564) as part of a comprehensive longitudinal investigation of children's beliefs, attitudes, and values concerning mathematics. Confirmatory factor analyses provided evidence for two components of math anxiety, a negative affective reactions component and a cognitive component. The affective component of math anxiety related more strongly and negatively than did the worry component to children's ability perceptions, performance perceptions, and math performance. The worry component related more strongly and positively than did the affective component to the importance that children attach to math and their reported actual effort in math. Girls reported stronger negative affective reactions to math than did boys. Ninth-grade students reported experiencing the most worry about math and sixth graders the least.",
"title": ""
},
{
"docid": "db422d1fcb99b941a43e524f5f2897c2",
"text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.",
"title": ""
},
{
"docid": "f71d0084ebb315a346b52c7630f36fb2",
"text": "A theory of motivation and emotion is proposed in which causal ascriptions play a key role. It is first documented that in achievement-related contexts there are a few dominant causal perceptions. The perceived causes of success and failure share three common properties: locus, stability, and controllability, with intentionality and globality as other possible causal structures. The perceived stability of causes influences changes in expectancy of success; all three dimensions of causality affect a variety of common emotional experiences, including anger, gratitude, guilt, hopelessness, pity, pride, and shame. Expectancy and affect, in turn, are presumed to guide motivated behavior. The theory therefore relates the structure of thinking to the dynamics of feeling and action. Analysis of a created motivational episode involving achievement strivings is offered, and numerous empirical observations are examined from this theoretical position. The strength of the empirical evidence, the capability of this theory to address prevalent human emotions, and the potential generality of the conception are stressed.",
"title": ""
}
] | [
{
"docid": "264aa89aa10fe05cff2f0e1a239e79ff",
"text": "While the terminology has changed over time, the basic concept of the Digital Twin model has remained fairly stable from its inception in 2001. It is based on the idea that a digital informational construct about a physical system could be created as an entity on its own. This digital information would be a “twin” of the information that was embedded within the physical system itself and be linked with that physical system through the entire lifecycle of the system.",
"title": ""
},
{
"docid": "fce170ad2238ad6066c9e17a3a388e7d",
"text": "Language resources that systematically organize paraphrases for binary relations are of great value for various NLP tasks and have recently been advanced in projects like PATTY, WiseNet and DEFIE. This paper presents a new method for building such a resource and the resource itself, called POLY. Starting with a very large collection of multilingual sentences parsed into triples of phrases, our method clusters relational phrases using probabilistic measures. We judiciously leverage fine-grained semantic typing of relational arguments for identifying synonymous phrases. The evaluation of POLY shows significant improvements in precision and recall over the prior works on PATTY and DEFIE. An extrinsic use case demonstrates the benefits of POLY for question answering.",
"title": ""
},
{
"docid": "d8ce92b054fc425a5db5bf17a62c6308",
"text": "The possibility that wind turbine noise (WTN) affects human health remains controversial. The current analysis presents results related to WTN annoyance reported by randomly selected participants (606 males, 632 females), aged 18-79, living between 0.25 and 11.22 km from wind turbines. WTN levels reached 46 dB, and for each 5 dB increase in WTN levels, the odds of reporting to be either very or extremely (i.e., highly) annoyed increased by 2.60 [95% confidence interval: (1.92, 3.58), p < 0.0001]. Multiple regression models had R(2)'s up to 58%, with approximately 9% attributed to WTN level. Variables associated with WTN annoyance included, but were not limited to, other wind turbine-related annoyances, personal benefit, noise sensitivity, physical safety concerns, property ownership, and province. Annoyance was related to several reported measures of health and well-being, although these associations were statistically weak (R(2 )< 9%), independent of WTN levels, and not retained in multiple regression models. The role of community tolerance level as a complement and/or an alternative to multiple regression in predicting the prevalence of WTN annoyance is also provided. The analysis suggests that communities are between 11 and 26 dB less tolerant of WTN than of other transportation noise sources.",
"title": ""
},
{
"docid": "7b4dd695182f7e15e58f44e309bf897c",
"text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.",
"title": ""
},
{
"docid": "c7808ecbca4c5bf8e8093dce4d8f1ea7",
"text": "41 Abstract— This project deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. The robot system consists of a Robot body, a control system, a CMOS camera, an accelerometer, a temperature sensor, a ZigBee module. The robot module will be designed with the help of CAD tool. The control system consists of Atmega16 micro controller and Atmel studio IDE. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to have grip of the pipe walls. Unique features of this robot are the caterpillar wheel, the four-bar mechanism supports the well grip of wall, a simple and easy user interface.",
"title": ""
},
{
"docid": "d3fda1730c1297ed3b63a1d4f133d893",
"text": "Registered nurses were queried about their knowledge and attitudes regarding pain management. Results suggest knowledge of pain management principles and interventions is insufficient.",
"title": ""
},
{
"docid": "42cfea27f8dcda6c58d2ae0e86f2fb1a",
"text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle mounted laser range scanner and a ccd camera. Through experimentations, we have shown that a clustered particle filter can be used to efficiently extract lane markings.",
"title": ""
},
{
"docid": "76d22feb7da3dbc14688b0d999631169",
"text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.",
"title": ""
},
{
"docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
},
{
"docid": "bc1f7e30b8dcef97c1d8de2db801c4f6",
"text": "In this paper a novel method is introduced based on the use of an unsupervised version of kernel least mean square (KLMS) algorithm for solving ordinary differential equations (ODEs). The algorithm is unsupervised because here no desired signal needs to be determined by user and the output of the model is generated by iterating the algorithm progressively. However, there are several new implementation, fast convergence and also little error. Furthermore, it is also a KLMS with obvious characteristics. In this paper the ability of KLMS is used to estimate the answer of ODE. First a trial solution of ODE is written as a sum of two parts, the first part satisfies the initial condition and the second part is trained using the KLMS algorithm so as the trial solution solves the ODE. The accuracy of the method is illustrated by solving several problems. Also the sensitivity of the convergence is analyzed by changing the step size parameters and kernel functions. Finally, the proposed method is compared with neuro-fuzzy [21] approach. Crown Copyright & 2011 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "042431e96028ed9729e6b174a78d642d",
"text": "We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.",
"title": ""
},
{
"docid": "a5ace543a0e9b87d54cbe77c6a86c40f",
"text": "Packet capture is an essential function for many network applications. However, packet drop is a major problem with packet capture in high-speed networks. This paper presents WireCAP, a novel packet capture engine for commodity network interface cards (NICs) in high-speed networks. WireCAP provides lossless zero-copy packet capture and delivery services by exploiting multi-queue NICs and multicore architectures. WireCAP introduces two new mechanisms-the ring-buffer-pool mechanism and the buddy-group-based offloading mechanism-to address the packet drop problem of packet capture in high-speed network. WireCAP is efficient. It also facilitates the design and operation of a user-space packet-processing application. Experiments have demonstrated that WireCAP achieves better packet capture performance when compared to existing packet capture engines.\n In addition, WireCAP implements a packet transmit function that allows captured packets to be forwarded, potentially after the packets are modified or inspected in flight. Therefore, WireCAP can be used to support middlebox-type applications. Thus, at a high level, WireCAP provides a new packet I/O framework for commodity NICs in high-speed networks.",
"title": ""
},
{
"docid": "a87da46ab4026c566e3e42a5695fd8c9",
"text": "Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for buliding a map of the 3D environment. Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach and the experiments show that it offers an improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.",
"title": ""
},
{
"docid": "2f5d428b8da4d5b5009729fc1794e53d",
"text": "The resolution of a synthetic aperture radar (SAR) image, in range and azimuth, is determined by the transmitted bandwidth and the synthetic aperture length, respectively. Various superresolution techniques for improving resolution have been proposed, and we have proposed an algorithm that we call polarimetric bandwidth extrapolation (PBWE). To apply PBWE to a radar image, one needs to first apply PBWE in the range direction and then in the azimuth direction, or vice versa . In this paper, PBWE is further extended to the 2-D case. This extended case (2D-PBWE) utilizes a 2-D polarimetric linear prediction model and expands the spatial frequency bandwidth in range and azimuth directions simultaneously. The performance of the 2D-PBWE is shown through a simulated radar image and a real polarimetric SAR image",
"title": ""
},
{
"docid": "3a75cf54ace0ebb56b985e1452151a91",
"text": "Ubiquitous networks support the roaming service for mobile communication devices. The mobile user can use the services in the foreign network with the help of the home network. Mutual authentication plays an important role in the roaming services, and researchers put their interests on the authentication schemes. Recently, in 2016, Gope and Hwang found that mutual authentication scheme of He et al. for global mobility networks had security disadvantages such as vulnerability to forgery attacks, unfair key agreement, and destitution of user anonymity. Then, they presented an improved scheme. However, we find that the scheme cannot resist the off-line guessing attack and the de-synchronization attack. Also, it lacks strong forward security. Moreover, the session key is known to HA in that scheme. To get over the weaknesses, we propose a new two-factor authentication scheme for global mobility networks. We use formal proof with random oracle model, formal verification with the tool Proverif, and informal analysis to demonstrate the security of the proposed scheme. Compared with some very recent schemes, our scheme is more applicable. Copyright © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "56bad8cef0c8ed0af6882dbc945298ef",
"text": "We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.",
"title": ""
},
{
"docid": "f5ba54c76166eed39da96f86a8bbd2a1",
"text": "The digital divide refers to the separation between those who have access to digital information and communications technology (ICT) and those who do not. Many believe that universal access to ICT would bring about a global community of interaction, commerce, and learning resulting in higher standards of living and improved social welfare. However, the digital divide threatens this outcome, leading many public policy makers to debate the best way to bridge the divide. Much of the research on the digital divide focuses on first order effects regarding who has access to the technology, but some work addresses the second order effects of inequality in the ability to use the technology among those who do have access. In this paper, we examine both first and second order effects of the digital divide at three levels of analysis the individual level, the organizational level, and the global level. At each level, we survey the existing research noting the theoretical perspective taken in the work, the research methodology employed, and the key results that were obtained. We then suggest a series of research questions at each level of analysis to guide researchers seeking to further examine the digital divide and how it impacts citizens, managers, and economies.",
"title": ""
},
{
"docid": "258e931d5c8d94f73be41cbb0058f49b",
"text": "VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure that the output is correct by comparing the outputs from multiple servers. VerSum assumes that at least one server is honest, and crucially, when servers disagree, VerSum uses an efficient conflict resolution protocol to determine which server(s) made a mistake and thus obtain the correct output.\n VerSum's contribution lies in achieving low server-side overhead for both incremental re-computation and conflict resolution, using three key ideas: (1) representing the computation as a functional program, which allows memoization of previous results; (2) recording the evaluation trace of the functional program in a carefully designed computation history to help clients determine which server made a mistake; and (3) introducing a new authenticated data structure for sequences, called SeqHash, that makes it efficient for servers to construct summaries of computation histories in the presence of incremental re-computation. Experimental results with an implementation of VerSum show that VerSum can be used for a variety of computations, that it can support many clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions.",
"title": ""
},
{
"docid": "43ca9719740147e88e86452bb42f5644",
"text": "Currently in the US, over 97% of food waste is estimated to be buried in landfills. There is nonetheless interest in strategies to divert this waste from landfills as evidenced by a number of programs and policies at the local and state levels, including collection programs for source separated organic wastes (SSO). The objective of this study was to characterize the state-of-the-practice of food waste treatment alternatives in the US and Canada. Site visits were conducted to aerobic composting and two anaerobic digestion facilities, in addition to meetings with officials that are responsible for program implementation and financing. The technology to produce useful products from either aerobic or anaerobic treatment of SSO is in place. However, there are a number of implementation issues that must be addressed, principally project economics and feedstock purity. Project economics varied by region based on landfill disposal fees. Feedstock purity can be obtained by enforcement of contaminant standards and/or manual or mechanical sorting of the feedstock prior to and after treatment. Future SSO diversion will be governed by economics and policy incentives, including landfill organics bans and climate change mitigation policies.",
"title": ""
},
{
"docid": "c7b9c324171d40cec24ed089933a06ce",
"text": "With the proliferation of the internet and increased global access to online media, cybercrime is also occurring at an increasing rate. Currently, both personal users and companies are vulnerable to cybercrime. A number of tools including firewalls and Intrusion Detection Systems (IDS) can be used as defense mechanisms. A firewall acts as a checkpoint which allows packets to pass through according to predetermined conditions. In extreme cases, it may even disconnect all network traffic. An IDS, on the other hand, automates the monitoring process in computer networks. The streaming nature of data in computer networks poses a significant challenge in building IDS. In this paper, a method is proposed to overcome this problem by performing online classification on datasets. In doing so, an incremental naive Bayesian classifier is employed. Furthermore, active learning enables solving the problem using a small set of labeled data points which are often very expensive to acquire. The proposed method includes two groups of actions i.e. offline and online. The former involves data preprocessing while the latter introduces the NADAL online method. The proposed method is compared to the incremental naive Bayesian classifier using the NSL-KDD standard dataset. There are three advantages with the proposed method: (1) overcoming the streaming data challenge; (2) reducing the high cost associated with instance labeling; and (3) improved accuracy and Kappa compared to the incremental naive Bayesian approach. Thus, the method is well-suited to IDS applications.",
"title": ""
}
] | scidocsrr |
b0fe005c63685b8e6c294dd475fc55e9 | BilBOWA: Fast Bilingual Distributed Representations without Word Alignments | [
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "8acd410ff0757423d09928093e7e8f63",
"text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .",
"title": ""
}
] | [
{
"docid": "2639c6ed94ad68f5e0c4579f84f52f35",
"text": "This article introduces the Swiss Army Menu (SAM), a radial menu that enables a very large number of functions on a single small tactile screen. The design of SAM relies on four different kinds of items, support for navigating in hierarchies of items and a control based on small thumb movements. SAM can thus offer a set of functions so large that it would typically have required a number of widgets that could not have been displayed in a single viewport at the same time.",
"title": ""
},
{
"docid": "feca1bd8b881f3d550f0f0912913081f",
"text": "There is an ever-increasing interest in the development of automatic medical diagnosis systems due to the advancement in computing technology and also to improve the service by medical community. The knowledge about health and disease is required for reliable and accurate medical diagnosis. Diabetic Retinopathy (DR) is one of the most common causes of blindness and it can be prevented if detected and treated early. DR has different signs and the most distinctive are microaneurysm and haemorrhage which are dark lesions and hard exudates and cotton wool spots which are bright lesions. Location and structure of blood vessels and optic disk play important role in accurate detection and classification of dark and bright lesions for early detection of DR. In this article, we propose a computer aided system for the early detection of DR. The article presents algorithms for retinal image preprocessing, blood vessel enhancement and segmentation and optic disk localization and detection which eventually lead to detection of different DR lesions using proposed hybrid fuzzy classifier. The developed methods are tested on four different publicly available databases. The presented methods are compared with recently published methods and the results show that presented methods outperform all others.",
"title": ""
},
{
"docid": "ea048488791219be809072862a061444",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "24632f6891d12600619e4bf7f9a444d1",
"text": "Product recommender systems are often deployed by e-commerce websites to improve user experience and increase sales. However, recommendation is limited by the product information hosted in those e-commerce sites and is only triggered when users are performing e-commerce activities. In this paper, we develop a novel product recommender system called METIS, a MErchanT Intelligence recommender System, which detects users' purchase intents from their microblogs in near real-time and makes product recommendation based on matching the users' demographic information extracted from their public profiles with product demographics learned from microblogs and online reviews. METIS distinguishes itself from traditional product recommender systems in the following aspects: 1) METIS was developed based on a microblogging service platform. As such, it is not limited by the information available in any specific e-commerce website. In addition, METIS is able to track users' purchase intents in near real-time and make recommendations accordingly. 2) In METIS, product recommendation is framed as a learning to rank problem. Users' characteristics extracted from their public profiles in microblogs and products' demographics learned from both online product reviews and microblogs are fed into learning to rank algorithms for product recommendation. We have evaluated our system in a large dataset crawled from Sina Weibo. The experimental results have verified the feasibility and effectiveness of our system. We have also made a demo version of our system publicly available and have implemented a live system which allows registered users to receive recommendations in real time.",
"title": ""
},
{
"docid": "9817009ca281ae09baf45b5f8bdef87d",
"text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.",
"title": ""
},
{
"docid": "4290b4ba8000aeaf24cd7fb8640b4570",
"text": "Drawing on semi-structured interviews and cognitive mapping with 14 craftspeople, this paper analyzes the socio-technical arrangements of people and tools in the context of workspaces and productivity. Using actor-network theory and the concept of companionability, both of which emphasize the role of human and non-human actants in the socio-technical fabrics of everyday life, I analyze the relationships between people, productivity and technology through the following themes: embodiment, provenance, insecurity, flow and companionability. The discussion section develops these themes further through comparison with rhetoric surrounding the Internet of Things (IoT). By putting the experiences of craftspeople in conversation with IoT rhetoric, I suggest several policy interventions for understanding connectivity and inter-device operability as material, flexible and respectful of human agency.",
"title": ""
},
{
"docid": "4782e5fb1044fa5f6a54cf8130f8f6fb",
"text": "Despite significant progress in object categorization, in recent years, a number of important challenges remain, mainly, ability to learn from limited labeled data and ability to recognize object classes within large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closest to their correct prototypes, in the embedding space, than to others. We show that resulting model shows improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets.",
"title": ""
},
{
"docid": "48703205408e6ebd8f8fc357560acc41",
"text": "Two experiments found that when asked to perform the physically exerting tasks of clapping and shouting, people exhibit a sizable decrease in individual effort when performing in groups as compared to when they perform alone. This decrease, which we call social loafing, is in addition to losses due to faulty coordination of group efforts. Social loafing is discussed in terms of its experimental generality and theoretical importance. The widespread occurrence, the negative consequences for society, and some conditions that can minimize social loafing are also explored.",
"title": ""
},
{
"docid": "8b3ab5df68f71ff4be4d3902c81e35be",
"text": "When learning to program, frustrating experiences contribute to negative learning outcomes and poor retention in the field. Defining a common framework that explains why these experiences occur can lead to better interventions and learning mechanisms. To begin constructing such a framework, we asked 45 software developers about the severity of their frustration and to recall their most recent frustrating programming experience. As a result, 67% considered their frustration to be severe. Further, we distilled the reported experiences into 11 categories, which include issues with mapping behaviors to code and broken programming tools. Finally, we discuss future directions for defining our framework and designing future interventions.",
"title": ""
},
{
"docid": "05ea7a05b620c0dc0a0275f55becfbc3",
"text": "Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a midlevel of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.",
"title": ""
},
{
"docid": "a81e4507632505b64f4839a1a23fa440",
"text": "Unity am e Deelopm nt w ith C# Alan Thorn In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D` Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. If you’re already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.",
"title": ""
},
{
"docid": "c6a7c67fa77d2a5341b8e01c04677058",
"text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.",
"title": ""
},
{
"docid": "0f20cfce49eaa9f447fc45b1d4c04be0",
"text": "Face recognition is a widely used technology with numerous large-scale applications, such as surveillance, social media and law enforcement. There has been tremendous progress in face recognition accuracy over the past few decades, much of which can be attributed to deep learning based approaches during the last five years. Indeed, automated face recognition systems are now believed to surpass human performance in some scenarios. Despite this progress, a crucial question still remains unanswered: given a face representation, how many identities can it resolve? In other words, what is the capacity of the face representation? A scientific basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different face representation methods, but will also establish an upper bound on the scalability of an automatic face recognition system. We cast the face capacity estimation problem under the information theoretic framework of capacity of a Gaussian noise channel. By explicitly accounting for two sources of representational noise: epistemic (model) uncertainty and aleatoric (data) variability, our approach is able to estimate the capacity of any given face representation. To demonstrate the efficacy of our approach, we estimate the capacity of a 128-dimensional deep neural network based face representation, FaceNet [1], and that of the classical Eigenfaces [2] representation of the same dimensionality. Our numerical experiments on unconstrained faces indicate that, (a) our capacity estimation model yields a capacity upper bound of 5.8×108 for FaceNet and 1×100 for Eigenface representation at a false acceptance rate (FAR) of 1%, (b) the capacity of the face representation reduces drastically as you lower the desired FAR (for FaceNet representation; the capacity at FAR of 0.1% and 0.001% is 2.4×106 and 7.0×102, respectively), and (c) the empirical performance of the FaceNet representation is significantly below the theoretical limit.",
"title": ""
},
{
"docid": "152122f523efc9150033dbf5798c650f",
"text": "Nowadays, computer systems are presented in almost all types of human activity and they support any kind of industry as well. Most of these systems are distributed where the communication between nodes is based on computer networks of any kind. Connectivity between system components is the key issue when designing distributed systems, especially systems of industrial informatics. The industrial area requires a wide range of computer communication means, particularly time-constrained and safety-enhancing ones. From fieldbus and industrial Ethernet technologies through wireless and internet-working solutions to standardization issues, there are many aspects of computer networks uses and many interesting research domains. Lots of them are quite sophisticated or even unique. The main goal of this paper is to present the survey of the latest trends in the communication domain of industrial distributed systems and to emphasize important questions as dependability, and standardization. Finally, the general assessment and estimation of the future development is provided. The presentation is based on the abstract description of dataflow within a system.",
"title": ""
},
{
"docid": "90c2121fc04c0c8d9c4e3d8ee7b8ecc0",
"text": "Measuring similarity between two data objects is a more challenging problem for data mining and knowledge discovery tasks. The traditional clustering algorithms have been mainly stressed on numerical data, the implicit property of which can be exploited to define distance function between the data points to define similarity measure. The problem of similarity becomes more complex when the data is categorical which do not have a natural ordering of values or can be called as non geometrical attributes. Clustering on relational data sets when majority of its attributes are of categorical types makes interesting facts. No earlier work has been done on clustering categorical attributes of relational data set types making use of the property of functional dependency as parameter to measure similarity. This paper is an extension of earlier work on clustering relational data sets where domains are unique and similarity is context based and introduces a new notion of similarity based on dependency of an attribute on other attributes prevalent in the relational data set. This paper also gives a brief overview of popular similarity measures of categorical attributes. This novel similarity measure can be used to apply on tuples and their respective values. The important property of categorical domain is that they have smaller number of attribute values. The similarity measure of relational data sets then can be applied to the smaller data sets for efficient results.",
"title": ""
},
{
"docid": "28e1c4c2622353fc87d3d8a971b9e874",
"text": "In-memory key/value store (KV-store) is a key building block for many systems like databases and large websites. Two key requirements for such systems are efficiency and availability, which demand a KV-store to continuously handle millions of requests per second. A common approach to availability is using replication, such as primary-backup (PBR), which, however, requires M+1 times memory to tolerate M failures. This renders scarce memory unable to handle useful user jobs.\n This article makes the first case of building highly available in-memory KV-store by integrating erasure coding to achieve memory efficiency, while not notably degrading performance. A main challenge is that an in-memory KV-store has much scattered metadata. A single KV put may cause excessive coding operations and parity updates due to excessive small updates to metadata. Our approach, namely Cocytus, addresses this challenge by using a hybrid scheme that leverages PBR for small-sized and scattered data (e.g., metadata and key), while only applying erasure coding to relatively large data (e.g., value). To mitigate well-known issues like lengthy recovery of erasure coding, Cocytus uses an online recovery scheme by leveraging the replicated metadata information to continuously serve KV requests. To further demonstrate the usefulness of Cocytus, we have built a transaction layer by using Cocytus as a fast and reliable storage layer to store database records and transaction logs. We have integrated the design of Cocytus to Memcached and extend it to support in-memory transactions. Evaluation using YCSB with different KV configurations shows that Cocytus incurs low overhead for latency and throughput, can tolerate node failures with fast online recovery, while saving 33% to 46% memory compared to PBR when tolerating two failures. A further evaluation using the SmallBank OLTP benchmark shows that in-memory transactions can run atop Cocytus with high throughput, low latency, and low abort rate and recover fast from consecutive failures.",
"title": ""
},
{
"docid": "fee1419f689259bc5fe7e4bfd8f0242c",
"text": "One of the challenges in computer vision is how to learn an accurate classifier for a new domain by using labeled images from an old domain under the condition that there is no available labeled images in the new domain. Domain adaptation is an outstanding solution that tackles this challenge by employing available source-labeled datasets, even with significant difference in distribution and properties. However, most prior methods only reduce the difference in subspace marginal or conditional distributions across domains while completely ignoring the source data label dependence information in a subspace. In this paper, we put forward a novel domain adaptation approach, referred to as Enhanced Subspace Distribution Matching. Specifically, it aims to jointly match the marginal and conditional distributions in a kernel principal dimensionality reduction procedure while maximizing the source label dependence in a subspace, thus raising the subspace distribution matching degree. Extensive experiments verify that it can significantly outperform several state-of-the-art methods for cross-domain image classification problems.",
"title": ""
},
{
"docid": "2d6ea84dcdae28291c5fdca01495d51f",
"text": "This paper presents how to generate questions from given passages using neural networks, where large scale QA pairs are automatically crawled and processed from Community-QA website, and used as training data. The contribution of the paper is 2-fold: First, two types of question generation approaches are proposed, one is a retrieval-based method using convolution neural network (CNN), the other is a generation-based method using recurrent neural network (RNN); Second, we show how to leverage the generated questions to improve existing question answering systems. We evaluate our question generation method for the answer sentence selection task on three benchmark datasets, including SQuAD, MS MARCO, and WikiQA. Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.",
"title": ""
},
{
"docid": "0a35370e6c99e122b8051a977029d77a",
"text": "To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"title": ""
},
{
"docid": "a30de4a213fe05c606fb16d204b9b170",
"text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. A propos de l’emploi des méthodes de panel sur des données inter-pays RÉSUMÉ. – Les travaux récents utilisant des régressions inter-pays peuvent être comparés à la recherche d'« un chat noir dans une pièce sans lumière ». La question de savoir si ces travaux ont apporté quelque chose de significatif à la connaissance économique est assez controversée. Mais la recherche du « chat noir » a conduit à quelques progrès en économétrie. L'objet de cet article est de discuter de ces progrès. Les problèmes posés par l'utilisation de panels de pays sont discutés dans deux contextes : celui de la croissance économique et de la convergence d'une part ; celui de la théorie de la parité des pouvoirs d'achat d'autre part. * G.S. MADDALA: Department of Economics, The Ohio State University. I would like to thank M. NERLOVE, P. SEVESTRE and an anonymous referee for helpful comments. Responsability for the omissions and any errors is my own. ANNALES D’ÉCONOMIE ET DE STATISTIQUE. – N° 55-56 – 1999 « The Gods love the obscure and hate the obvious » BRIHADARANYAKA UPANISHAD",
"title": ""
}
] | scidocsrr |
94fbd608b3c21fe9a47e5c6c42ad18ad | Recorded Behavior as a Valuable Resource for Diagnostics in Mobile Phone Addiction: Evidence from Psychoinformatics | [
{
"docid": "2acbfab9d69f3615930c1960a2e6dda9",
"text": "OBJECTIVE\nThe aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated.\n\n\nMETHODS\nA total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS.\n\n\nRESULTS\nBased on the factor analysis results, the subscale \"disturbance of reality testing\" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS.\n\n\nCONCLUSIONS\nThis study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.",
"title": ""
},
{
"docid": "2fe2f83fa9a0dca9f01fd9e5e80ca515",
"text": "For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researches and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.",
"title": ""
}
] | [
{
"docid": "192c4c695f543e79f0d3e41f5920f637",
"text": "A boosted convolutional neural network (BCNN) system is proposed to enhance the pedestrian detection performance in this work. Being inspired by the classic boosting idea, we develop a weighted loss function that emphasizes challenging samples in training a convolutional neural network (CNN). Two types of samples are considered challenging: 1) samples with detection scores falling in the decision boundary, and 2) temporally associated samples with inconsistent scores. A weighting scheme is designed for each of them. Finally, we train a boosted fusion layer to benefit from the integration of these two weighting schemes. We use the Fast-RCNN as the baseline, and test the corresponding BCNN on the Caltech pedestrian dataset in the experiment, and show a significant performance gain of the BCNN over its baseline.",
"title": ""
},
{
"docid": "f8275a80021312a58c9cd52bbcd4c431",
"text": "Mobile online social networks (OSNs) are emerging as the popular mainstream platform for information and content sharing among people. In order to provide Quality of Experience (QoE) support for mobile OSN services, in this paper we propose a socially-driven learning-based framework, namely Spice, for media content prefetching to reduce the access delay and enhance mobile user's satisfaction. Through a large-scale data-driven analysis over real-life mobile Twitter traces from over 17,000 users during a period of five months, we reveal that the social friendship has a great impact on user's media content click behavior. To capture this effect, we conduct social friendship clustering over the set of user's friends, and then develop a cluster-based Latent Bias Model for socially-driven learning-based prefetching prediction. We then propose a usage-adaptive prefetching scheduling scheme by taking into account that different users may possess heterogeneous patterns in the mobile OSN app usage. We comprehensively evaluate the performance of Spice framework using trace-driven emulations on smartphones. Evaluation results corroborate that the Spice can achieve superior performance, with an average 67.2% access delay reduction at the low cost of cellular data and energy consumption. Furthermore, by enabling users to offload their machine learning procedures to a cloud server, our design can achieve speed-up of a factor of 1000 over the local data training execution on smartphones.",
"title": ""
},
{
"docid": "a6b4ee8a6da7ba240b7365cf1a70669d",
"text": "Received: 2013-04-15 Accepted: 2013-05-13 Accepted after one revision by Prof. Dr. Sinz. Published online: 2013-06-14 This article is also available in German in print and via http://www. wirtschaftsinformatik.de: Blohm I, Leimeister JM (2013) Gamification. Gestaltung IT-basierter Zusatzdienstleistungen zur Motivationsunterstützung und Verhaltensänderung. WIRTSCHAFTSINFORMATIK. doi: 10.1007/s11576-013-0368-0.",
"title": ""
},
{
"docid": "f92f0a3d46eaf14e478a41f87b8ad369",
"text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.",
"title": ""
},
{
"docid": "0b51889817aca2afd7c1c754aa47f7de",
"text": "OBJECTIVE\nThis study aims to compare how national guidelines approach the management of obesity in reproductive age women.\n\n\nSTUDY DESIGN\nWe conducted a search for national guidelines in the English language on the topic of obesity surrounding the time of a pregnancy. We identified six primary source documents and several secondary source documents from five countries. Each document was then reviewed to identify: (1) statements acknowledging increased health risks related to obesity and reproductive outcomes, (2) recommendations for the management of obesity before, during, or after pregnancy.\n\n\nRESULTS\nAll guidelines cited an increased risk for miscarriage, birth defects, gestational diabetes, hypertension, fetal growth abnormalities, cesarean sections, difficulty with anesthesia, postpartum hemorrhage, and obesity in offspring. Counseling on the risks of obesity and weight loss before pregnancy were universal recommendations. There were substantial differences in the recommendations pertaining to gestational weight gain goals, nutrient and vitamin supplements, screening for gestational diabetes, and thromboprophylaxis among the guidelines.\n\n\nCONCLUSION\nStronger evidence from randomized trials is needed to devise consistent recommendations for obese reproductive age women. This research may also assist clinicians in overcoming one of the many obstacles they encounter when providing care to obese women.",
"title": ""
},
{
"docid": "625177f221163e38ecf91b884cf4bcd2",
"text": "Equivalent time oscilloscopes are widely used as an alternative to real-time oscilloscopes when high timing resolution is needed. For their correct operation, they need the trigger signal to be accurately aligned to the incoming data, which is achieved by the use of a clock and data recovery circuit (CDR). In this paper, a new multilevel bang-bang phase detector (BBPD) for CDRs is presented; the proposed phase detection scheme disregards samples taken close to the data transitions for the calculation of the phase difference between the inputs, thus eliminating metastability, one of the main issues hindering the performance of BBPDs.",
"title": ""
},
{
"docid": "bb72e4d6f967fb88473756cdcbb04252",
"text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.",
"title": ""
},
{
"docid": "667bca62dd6a9e755b4bae25e2670bb8",
"text": "This paper presents a Phantom Go program. It is based on a MonteCarlo approach. The program plays Phantom Go at an intermediate level.",
"title": ""
},
{
"docid": "c3eca8a83161a19c77406dc6393aa5b0",
"text": "Cell division in eukaryotes requires extensive architectural changes of the nuclear envelope (NE) to ensure that segregated DNA is finally enclosed in a single cell nucleus in each daughter cell. Higher eukaryotic cells have evolved 'open' mitosis, the most extreme mechanism to solve the problem of nuclear division, in which the NE is initially completely disassembled and then reassembled in coordination with DNA segregation. Recent progress in the field has now started to uncover mechanistic and molecular details that underlie the changes in NE reorganization during open mitosis. These studies reveal a tight interplay between NE components and the mitotic machinery.",
"title": ""
},
{
"docid": "dc71b53847d33e82c53f0b288da89bfa",
"text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.",
"title": ""
},
{
"docid": "72c4c247c1314ebcbbec4f43becd46f0",
"text": "The evolutionary origin of the eukaryotic cell represents an enigmatic, yet largely incomplete, puzzle. Several mutually incompatible scenarios have been proposed to explain how the eukaryotic domain of life could have emerged. To date, convincing evidence for these scenarios in the form of intermediate stages of the proposed eukaryogenesis trajectories is lacking, presenting the emergence of the complex features of the eukaryotic cell as an evolutionary deus ex machina. However, recent advances in the field of phylogenomics have started to lend support for a model that places a cellular fusion event at the basis of the origin of eukaryotes (symbiogenesis), involving the merger of an as yet unknown archaeal lineage that most probably belongs to the recently proposed 'TACK superphylum' (comprising Thaumarchaeota, Aigarchaeota, Crenarchaeota and Korarchaeota) with an alphaproteobacterium (the protomitochondrion). Interestingly, an increasing number of so-called ESPs (eukaryotic signature proteins) is being discovered in recently sequenced archaeal genomes, indicating that the archaeal ancestor of the eukaryotic cell might have been more eukaryotic in nature than presumed previously, and might, for example, have comprised primitive phagocytotic capabilities. In the present paper, we review the evolutionary transition from archaeon to eukaryote, and propose a new model for the emergence of the eukaryotic cell, the 'PhAT (phagocytosing archaeon theory)', which explains the emergence of the cellular and genomic features of eukaryotes in the light of a transiently complex phagocytosing archaeon.",
"title": ""
},
{
"docid": "a14afa0d14a0fcfb890c8f2944750230",
"text": "RNA turnover is an integral part of cellular RNA homeostasis and gene expression regulation. Whereas the cytoplasmic control of protein-coding mRNA is often the focus of study, we discuss here the less appreciated role of nuclear RNA decay systems in controlling RNA polymerase II (RNAPII)-derived transcripts. Historically, nuclear RNA degradation was found to be essential for the functionalization of transcripts through their proper maturation. Later, it was discovered to also be an important caretaker of nuclear hygiene by removing aberrant and unwanted transcripts. Recent years have now seen a set of new protein complexes handling a variety of new substrates, revealing functions beyond RNA processing and the decay of non-functional transcripts. This includes an active contribution of nuclear RNA metabolism to the overall cellular control of RNA levels, with mechanistic implications during cellular transitions. RNA is controlled at various stages of transcription and processing to achieve appropriate gene regulation. Whereas much research has focused on the cytoplasmic control of RNA levels, this Review discusses our emerging appreciation of the importance of nuclear RNA regulation, including the molecular machinery involved in nuclear RNA decay, how functional RNAs bypass degradation and roles for nuclear RNA decay in physiology and disease.",
"title": ""
},
{
"docid": "90acdc98c332de55e790d20d48dfde5e",
"text": "PURPOSE AND DESIGN\nSnack and Relax® (S&R), a program providing healthy snacks and holistic relaxation modalities to hospital employees, was evaluated for immediate impact. A cross-sectional survey was then conducted to assess the professional quality of life (ProQOL) in registered nurses (RNs); compare S&R participants/nonparticipants on compassion satisfaction (CS), burnout, and secondary traumatic stress (STS); and identify situations in which RNs experienced compassion fatigue or burnout and the strategies used to address these situations.\n\n\nMETHOD\nPre- and post vital signs and self-reported stress were obtained from S&R attendees (N = 210). RNs completed the ProQOL Scale measuring CS, burnout, and STS (N = 158).\n\n\nFINDINGS\nSignificant decreases in self-reported stress, respirations, and heart rate were found immediately after S&R. Low CS was noted in 28.5% of participants, 25.3% had high burnout, and 23.4% had high STS. S&R participants and nonparticipants did not differ on any of the ProQOL scales. Situations in which participants experienced compassion fatigue/burnout were categorized as patient-related, work-related, and personal/family-related. Strategies to address these situations were holistic and stress reducing.\n\n\nCONCLUSION\nProviding holistic interventions such as S&R for nurses in the workplace may alleviate immediate feelings of stress and provide a moment of relaxation in the workday.",
"title": ""
},
{
"docid": "e131e4d4bb59b4d0b513cc7c5dd017f2",
"text": "Although touch is one of the most neglected modalities of communication, several lines of research bear on the important communicative functions served by the modality. The authors highlighted the importance of touch by reviewing and synthesizing the literatures pertaining to the communicative functions served by touch among humans, nonhuman primates, and rats. In humans, the authors focused on the role that touch plays in emotional communication, attachment, bonding, compliance, power, intimacy, hedonics, and liking. In nonhuman primates, the authors examined the relations among touch and status, stress, reconciliation, sexual relations, and attachment. In rats, the authors focused on the role that touch plays in emotion, learning and memory, novelty seeking, stress, and attachment. The authors also highlighted the potential phylogenetic and ontogenetic continuities and discussed suggestions for future research.",
"title": ""
},
{
"docid": "a37493c6cde320091c1baf7eaa57b982",
"text": "The pervasiveness of cell phones and mobile social media applications is generating vast amounts of geolocalized user-generated content. Since the addition of geotagging information, Twitter has become a valuable source for the study of human dynamics. Its analysis is shedding new light not only on understanding human behavior but also on modeling the way people live and interact in their urban environments. In this paper, we evaluate the use of geolocated tweets as a complementary source of information for urban planning applications. Our contributions are focussed in two urban planing areas: (1) a technique to automatically determine land uses in a specific urban area based on tweeting patterns, and (2) a technique to automatically identify urban points of interest as places with high activity of tweets. We apply our techniques in Manhattan (NYC) using 49 days of geolocated tweets and validate them using land use and landmark information provided by various NYC departments. Our results indicate that geolocated tweets are a powerful and dynamic data source to characterize urban environments.",
"title": ""
},
{
"docid": "124cc672103959685cdcb3e98ae33d93",
"text": "With the rise of social media and advancements in AI technology, human-bot interaction will soon be commonplace. In this paper we explore human-bot interaction in STACK OVERFLOW, a question and answer website for developers. For this purpose, we built a bot emulating an ordinary user answering questions concerning the resolution of git error messages. In a first run this bot impersonated a human, while in a second run the same bot revealed its machine identity. Despite being functionally identical, the two bot variants elicited quite different reactions.",
"title": ""
},
{
"docid": "436900539406faa9ff34c1af12b6348d",
"text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.",
"title": ""
},
{
"docid": "61c4146ac8b55167746d3f2b9c8b64e8",
"text": "In a variety of Network-based Intrusion Detection System (NIDS) applications, one desires to detect groups of unknown attack (e.g., botnet) packet-flows, with a group potentially manifesting its atypicality (relative to a known reference “normal”/null model) on a low-dimensional subset of the full measured set of features used by the IDS. What makes this anomaly detection problem quite challenging is that it is a priori unknown which (possibly sparse) subset of features jointly characterizes a particular application, especially one that has not been seen before, which thus represents an unknown behavioral class (zero-day threat). Moreover, nowadays botnets have become evasive, evolving their behavior to avoid signature-based IDSes. In this work, we apply a novel active learning (AL) framework for botnet detection, facilitating detection of unknown botnets (assuming no ground truth examples of same). We propose a new anomaly-based feature set that captures the informative features and exploits the sequence of packet directions in a given flow. Experiments on real world network traffic data, including several common Zeus botnet instances, demonstrate the advantage of our proposed features and AL system.",
"title": ""
},
{
"docid": "8dab17013cd7753706d818b492a5eb15",
"text": "The paper analyses potentials, challenges and problems of the rural tourism from the point of view of its impact on sustainable rural development. It explores alternative sources of income for rural people by means of tourism and investigates effects of the rural tourism on agricultural production in local rural communities. The aim is to identify the existing and potential tourist attractions within the rural areas in Southern Russia and to provide solutions to be introduced in particular rural settlements in order to make them attractive for tourists. The paper includes the elaboration and testing of a methodology for evaluating the rural tourism potentials using the case of rural settlements of Stavropol Krai, Russia. The paper concludes with a ranking of the selected rural settlements according to their rural tourist capacity and substantiation of the tourism models to be implemented to ensure a sustainable development of the considered rural areas.",
"title": ""
}
] | scidocsrr |
98463290f3e6afe821617921e80fba92 | A Systematic Review of the Use of Blockchain in Healthcare | [
{
"docid": "d01339e077c9d8300b4616e7c713f48e",
"text": "Blockchains as a technology emerged to facilitate money exchange transactions and eliminate the need for a trusted third party to notarize and verify such transactions as well as protect data security and privacy. New structures of Blockchains have been designed to accommodate the need for this technology in other fields such as e-health, tourism and energy. This paper is concerned with the use of Blockchains in managing and sharing electronic health and medical records to allow patients, hospitals, clinics, and other medical stakeholder to share data amongst themselves, and increase interoperability. The selection of the Blockchains used architecture depends on the entities participating in the constructed chain network. Although the use of Blockchains may reduce redundancy and provide caregivers with consistent records about their patients, it still comes with few challenges which could infringe patients' privacy, or potentially compromise the whole network of stakeholders. In this paper, we investigate different Blockchains structures, look at existing challenges and provide possible solutions. We focus on challenges that may expose patients' privacy and the resiliency of Blockchains to possible attacks.",
"title": ""
},
{
"docid": "91c4a82cfcf69c75352d569a883ea0d3",
"text": "Permissionless blockchain-based cryptocurrencies commonly use proof-of-work (PoW) or proof-of-stake (PoS) to ensure their security, e.g. to prevent double spending attacks. However, both approaches have disadvantages: PoW leads to massive amounts of wasted electricity and re-centralization, whereas major stakeholders in PoS might be able to create a monopoly. In this work, we propose proof-of-personhood (PoP), a mechanism that binds physical entities to virtual identities in a way that enables accountability while preserving anonymity. Afterwards we introduce PoPCoin, a new cryptocurrency, whose consensus mechanism leverages PoP to eliminate the dis-advantages of PoW and PoS while ensuring security. PoPCoin leads to a continuously fair and democratic wealth creation process which paves the way for an experimental basic income infrastructure.",
"title": ""
}
] | [
{
"docid": "22255906a7f1d30c9600728a6dc9ad9f",
"text": "The next major step in the evolution of LTE targets the rapidly increasing demand for mobile broadband services and traffic volumes. One of the key technologies is a new carrier type, referred to in this article as a Lean Carrier, an LTE carrier with minimized control channel overhead and cell-specific reference signals. The Lean Carrier can enhance spectral efficiency, increase spectrum flexibility, and reduce energy consumption. This article provides an overview of the motivations and main use cases of the Lean Carrier. Technical challenges are highlighted, and design options are discussed; finally, a performance evaluation quantifies the benefits of the Lean Carrier.",
"title": ""
},
{
"docid": "8dee3ada764a40fce6b5676287496ccd",
"text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.",
"title": ""
},
{
"docid": "44cf5669d05a759ab21b3ebc1f6c340d",
"text": "Linear variable differential transformer (LVDT) sensors are widely used in hydraulic and pneumatic mechatronic systems for measuring physical quantities like displacement, force or pressure. The LVDT sensor consists of two magnetic coupled coils with a common core and this sensor converts the displacement of core into reluctance variation of magnetic circuit. LVDT sensors combines good accuracy (0.1 % error) with low cost, but they require relative complex electronics. Standard electronics for LVDT sensor conditioning is analog $the coupled coils constitute an inductive half-bridge supplied with 5 kHz sinus excitation from a quadrate oscillator. The output phase span is amplified and synchronous demodulated. This analog technology works well but has its drawbacks - hard to adjust, many components and packages, no connection to computer systems. To eliminate all these disadvantages, our team from \"Politehnica\" University of Bucharest has developed a LVDT signal conditioner using system on chip microcontroller MSP430F149 from Texas Instruments. This device integrates all peripherals required for LVDT signal conditioning (pulse width modulation modules, analog to digital converter, timers, enough memory resources and processing power) and offers also excellent low-power options. Resulting electronic module is a one-chip solution made entirely in SMD technology and its small dimensions allow its integration into sensor's body. Present paper focuses on specific issues of this digital solution for LVDT conditioning and compares it with classic analog solution from different points of view: error curve, power consumption, communication options, dimensions and production cost. Microcontroller software (firmware) and digital signal conditioning techniques for LVDT are also analyzed. Use of system on chip devices for signal conditioning allows realization of low cost compact transducers with same or better performances than their analog counterparts, but with extra options like serial communication channels, self-calibration, local storage of measured values and fault detection",
"title": ""
},
{
"docid": "467ff4b60acb874c0430ae4c20d62137",
"text": "The purpose of this paper is twofold. First, we give a survey of the known methods of constructing lattices in complex hyperbolic space. Secondly, we discuss some of the lattices constructed by Deligne and Mostow and by Thurston in detail. In particular, we give a unified treatment of the constructions of fundamental domains and we relate this to other properties of these lattices.",
"title": ""
},
{
"docid": "b8bcd83f033587533d7502c54a2b67da",
"text": "The development of structural health monitoring (SHM) technology has evolved for over fifteen years in Hong Kong since the implementation of the “Wind And Structural Health Monitoring System (WASHMS)” on the suspension Tsing Ma Bridge in 1997. Five cable-supported bridges in Hong Kong, namely the Tsing Ma (suspension) Bridge, the Kap Shui Mun (cable-stayed) Bridge, the Ting Kau (cable-stayed) Bridge, the Western Corridor (cable-stayed) Bridge, and the Stonecutters (cable-stayed) Bridge, have been instrumented with sophisticated long-term SHM systems. These SHM systems mainly focus on the tracing of structural behavior and condition of the long-span bridges over their lifetime. Recently, a structural health monitoring and maintenance management system (SHM&MMS) has been designed and will be implemented on twenty-one sea-crossing viaduct bridges with a total length of 9,283 km in the Hong Kong Link Road (HKLR) of the Hong Kong – Zhuhai – Macao Bridge of which the construction commenced in mid-2012. The SHM&MMS gives more emphasis on durability monitoring of the reinforced concrete viaduct bridges in marine environment and integration of the SHM system and bridge maintenance management system. It is targeted to realize the transition from traditional corrective and preventive maintenance to condition-based maintenance (CBM) of in-service bridges. The CBM uses real-time and continuous monitoring data and monitoring-derived information on the condition of bridges (including structural performance and deterioration mechanisms) to identify when the actual maintenance is necessary and how cost-effective maintenance can be conducted. This paper outlines how to incorporate SHM technology into bridge maintenance strategy to realize CBM management of bridges.",
"title": ""
},
{
"docid": "3b1b829e6d017d574562e901f4963bc4",
"text": "Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm— maximum variance unfolding—for learning faithful low dimensional representations of high dimensional data. The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning.",
"title": ""
},
{
"docid": "aab5b2bb3061abc2405700a1001a464d",
"text": "Although social skills group interventions for children with autism are common in outpatient clinic settings, little research has been conducted to determine the efficacy of such treatments. This study examined the effectiveness of an outpatient clinic-based social skills group intervention with four high-functioning elementary-aged children with autism. The group was designed to teach specific social skills, including greeting, conversation, and play skills in a brief therapy format (eight sessions total). At the end of each skills-training session, children with autism were observed in play sessions with typical peers. Typical peers received peer education about ways to interact with children with autism. Results indicate that a social skills group implemented in an outpatient clinic setting was effective in improving greeting and play skills, with less clear improvements noted in conversation skills. In addition, children with autism reported increased feelings of social support from classmates at school following participation in the group. However, parent report data of greeting, conversation, and play skills outside of the clinic setting indicated significant improvements in only greeting skills. Thus, although the clinic-based intervention led to improvements in social skills, fewer changes were noted in the generalization to nonclinic settings.",
"title": ""
},
{
"docid": "1df3f59834420b108677e0a40e4cac63",
"text": "We extend classic review mining work by building a binary classifier that predicts whether a review of a documentary film was written by an expert or a layman with 90.70% accuracy (F1 score), and compare the characteristics of the predicted classes. A variety of standard lexical and syntactic features was used for this supervised learning task. Our results suggest that experts write comparatively lengthier and more detailed reviews that feature more complex grammar and a higher diversity in their vocabulary. Layman reviews are more subjective and contextualized in peoples’ everyday lives. Our error analysis shows that laymen are about twice as likely to be mistaken as experts than vice versa. We argue that the type of author might be a useful new feature for improving the accuracy of predicting the rating, helpfulness and authenticity of reviews. Finally, the outcomes of this work might help researchers and practitioners in the field of impact assessment to gain a more fine-grained understanding of the perception of different types of media consumers and reviewers of a topic, genre or information product.",
"title": ""
},
{
"docid": "a88c0d45ca7859c050e5e76379f171e6",
"text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinal cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which in turn, can help practitioners make better diagnostic and treatment decisions. Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease the healthcare related economic challenges.",
"title": ""
},
{
"docid": "2c5ba4f458b3d185f8b73d091a9b696c",
"text": "Community structure is one of the key properties of real-world complex networks. It plays a crucial role in their behaviors and topology. While an important work has been done on the issue of community detection, very little attention has been devoted to the analysis of the community structure. In this paper, we present an extensive investigation of the overlapping community network deduced from a large-scale co-authorship network. The nodes of the overlapping community network rep-resent the functional communities of the co-authorship network, and the links account for the fact that communities share some nodes in the co-authorship network. The comparative evaluation of the topological properties of these two networks shows that they share similar topological properties. These results are very interesting. Indeed, the network of communities seems to be a good representative of the original co-authorship network. With its smaller size, it may be more practical in order to realize various analyses that cannot be performed easily in large-scale real-world networks.",
"title": ""
},
{
"docid": "15fb8b92428ce4f2c06d926fd323e9ef",
"text": "Convolutional Neural Network (CNN) is one of the most effective neural network model for many classification tasks, such as voice recognition, computer vision and biological information processing. Unfortunately, Computation of CNN is both memory-intensive and computation-intensive, which brings a huge challenge to the design of the hardware accelerators. A large number of hardware accelerators for CNN inference are designed by the industry and the academia. Most of the engines are based on 32-bit floating point matrix multiplication, where the data precision is over-provisioned for inference job and the hardware cost are too high. In this paper, a 8-bit fixed-point LeNet inference engine (Laius) is designed and implemented on FPGA. In order to reduce the consumption of FPGA resource, we proposed a methodology to find the optimal bit-length for weight and bias in LeNet, which results in using 8-bit fixed point for most of the computation and using 16-bit fixed point for other computation. The PE (Processing Element) design is proposed. Pipelining and PE tiling technique is use to improve the performance of the inference engine. By theoretical analysis, we came to the conclusion that DSP resource in FPGA is the most critical resource, it should be carefully used during the design process. We implement the inference engine on Xilinx 485t FPGA. Experiment result shows that the designed LeNet inference engine can achieve 44.9 Gops throughput with 8-bit fixed-point operation after pipelining. Moreover, with only 1% loss of accuracy, the 8-bit fixed-point engine largely reduce 31.43% in latency, 87.01% in LUT consumption, 66.50% in BRAM consumption, 65.11% in DSP consumption and 47.95% reduction in power compared to a 32-bit fixed-point inference engine with the same structure.",
"title": ""
},
{
"docid": "ba41dfe1382ae0bc45d82d197b124382",
"text": "Business Intelligence (BI) deals with integrated approaches to management support. Currently, there are constraints to BI adoption and a new era of analytic data management for business intelligence these constraints are the integrated infrastructures that are subject to BI have become complex, costly, and inflexible, the effort required consolidating and cleansing enterprise data and Performance impact on existing infrastructure / inadequate IT infrastructure. So, in this paper Cloud computing will be used as a possible remedy for these issues. We will represent a new environment atmosphere for the business intelligence to make the ability to shorten BI implementation windows, reduced cost for BI programs compared with traditional on-premise BI software, Ability to add environments for testing, proof-of-concepts and upgrades, offer users the potential for faster deployments and increased flexibility. Also, Cloud computing enables organizations to analyze terabytes of data faster and more economically than ever before. Business intelligence (BI) in the cloud can be like a big puzzle. Users can jump in and put together small pieces of the puzzle but until the whole thing is complete the user will lack an overall view of the big picture. In this paper reading each section will fill in a piece of the puzzle.",
"title": ""
},
{
"docid": "122ed18a623510052664996c7ef4b4bb",
"text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field. We also discuss the key problems which arise in the context of this important field and the corresponding",
"title": ""
},
{
"docid": "bb5dccb965c71fcbb8c4f2f924e65316",
"text": "BACKGROUND AND OBJECTIVES\nBecause skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed in order to assist dermatologists in their diagnosis. This paper aims to present a review of the current methods, and outline a comparative analysis with regards to several of the fundamental steps of image processing, such as image acquisition, pre-processing and segmentation.\n\n\nMETHODS\nTechniques that have been proposed to achieve these tasks were identified and reviewed. As to the image segmentation task, the techniques were classified according to their principle.\n\n\nRESULTS\nThe techniques employed in each step are explained, and their strengths and weaknesses are identified. In addition, several of the reviewed techniques are applied to macroscopic and dermoscopy images in order to exemplify their results.\n\n\nCONCLUSIONS\nThe image segmentation of skin lesions has been addressed successfully in many studies; however, there is a demand for new methodologies in order to improve the efficiency.",
"title": ""
},
{
"docid": "3f6c3f979255b0d8a3f78ecd579a1cca",
"text": "Botnet is most widespread and occurs commonly in today's cyber attacks, resulting in serious threats to our network assets and organization's properties. Botnets are collections of compromised computers (Bots) which are remotely controlled by its originator (BotMaster) under a common Commond-and-Control (C & C) infrastructure. They are used to distribute commands to the Bots for malicious activities such as distributed denial-of-service (DDoS) attacks, sending large amount of SPAM and other nefarious purposes. Understanding the Botnet C & C channels is a critical component to precisely identify, detect, and mitigate the Botnets threats. Therefore, in this paper we provide a classification of Botnets C & C channels and evaluate well-known protocols (e.g. IRC, HTTP, and P2P) which are being used in each of them.",
"title": ""
},
{
"docid": "a0124ccd8586bd082ef4510389269d5d",
"text": "We present a convolutional-neural-network-based system that faithfully colorizes black and white photographic images without direct human assistance. We explore various network architectures, objectives, color spaces, and problem formulations. The final classification-based model we build generates colorized images that are significantly more aesthetically-pleasing than those created by the baseline regression-based model, demonstrating the viability of our methodology and revealing promising avenues for future work.",
"title": ""
},
{
"docid": "0102748c7f9969fb53a3b5ee76b6eefe",
"text": "Face veri cation is the task of deciding by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face veri cation accuracy. In this paper we propose a new method, named the Cosine Similarity Metric Learning (CSML) for learning a distance metric for facial veri cation. The use of cosine similarity in our method leads to an e ective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face veri cation has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication. The biggest challenge in face veri cation comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very di cult problem, especially using images captured in totally uncontrolled environment, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the rst popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face veri cation algorithms. Unlike FERET, LFW is designed for unconstrained face veri cation. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1). Methods for frontal images cannot cope with these variations and as such many researchers have turned to machine learning to 2 Hieu V. Nguyen and Li Bai Fig. 1. From FERET to LFW develop learning based face veri cation methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semide nite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between di erently labeled inputs. Goldberger et al. [10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classi cation. The algorithm is to learn a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classi er on a training set. Because it uses softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. 
[12] proposed a method that learns a matrix designed to improve the performance of kNN classi cation. The objective function is composed of two terms. The rst term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The rst contribution is Cosine Similarity Metric Learning for Face Veri cation 3 that we have shown cosine similarity to be an e ective alternative to Euclidean distance in metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric signi cantly in most cases. Our method is di erent from all the above methods in terms of distance measures. All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space whilst our method uses cosine similarity which leads to a simple and e ective metric learning method. The rest of this paper is structured as follows. Section 2 presents CSML method in detail. Section 3 present how CSML can be applied to face veri cation. Experimental results are presented in section 4. Finally, conclusion is given in section 5. 1 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 1.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is de ned as: CS(x, y) = x y ‖x‖ ‖y‖ Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 1.3, this property allows the objective function to be simple and e ective. 1.2 Metric learning formulation Let {xi, yi, li}i=1 denote a training set of s labeled samples with pairs of input vectors xi, yi ∈ R and binary class labels li ∈ {1, 0} which indicates whether xi and yi match or not. The goal is to learn a linear transformation A : R → R(d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y,A) = (Ax) (Ay) ‖Ax‖ ‖Ay‖ = xAAy √ xTATAx √ yTATAy Speci cally, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by de ning the objective function. 4 Hieu V. Nguyen and Li Bai 1.3 Objective function First, we de ne positive and negative sample index sets Pos and Neg as:",
"title": ""
},
{
"docid": "3e77ca4aa346bfe6cf6aacbffdcf344d",
"text": "This paper introduces a shape descriptor, the soft shape context, motivated by the shape context method. Unlike the original shape context method, where each image point was hard assigned into a single histogram bin, we instead allow each image point to contribute to multiple bins, hence more robust to distortions. The soft shape context can easily be integrated into the iterative closest point (ICP) method as an auxiliary feature vector, enriching the representation of an image point from spatial information only, to spatial and shape information. This yields a registration method more robust than the original ICP method. The method is general for 2D shapes. It does not calculate derivatives, hence being able to handle shapes with junctions and discontinuities. We present experimental results to demonstrate the robustness compared with the standard ICP method.",
"title": ""
},
{
"docid": "a935c84adaeeb6f691d65b03dd749c95",
"text": "The use of wearable devices during running has become commonplace. Although there is ongoing research on interaction techniques for use while running, the effects of the resulting interactions on the natural movement patterns have received little attention so far. While previous studies on pedestrians reported increased task load and reduced walking speed while interacting, running movement further restricts interaction and requires minimizing interferences, e.g. to avoid injuries and maximize comfort. In this paper, we aim to shed light on how interacting with wearable devices affects running movement. We present results from a motion-tracking study (N=12) evaluating changes in movement and task load when users interact with a smartphone, a smartwatch, or a pair of smartglasses while running. In our study, smartwatches required less effort than smartglasses when using swipe input, resulted in less interference with the running movement and were preferred overall. From our results, we infer a number of guidelines regarding interaction design targeting runners.",
"title": ""
},
{
"docid": "33e7dea74a2506bce40b8e7f48073c9e",
"text": "Linker for activation of B cells (LAB, also called NTAL; a product of wbscr5 gene) is a newly identified transmembrane adaptor protein that is expressed in B cells, NK cells, and mast cells. Upon BCR activation, LAB is phosphorylated and interacts with Grb2. LAB is capable of rescuing thymocyte development in LAT-deficient mice. To study the in vivo function of LAB, LAB-deficient mice were generated. Although disruption of the Lab gene did not affect lymphocyte development, it caused mast cells to be hyperresponsive to stimulation via the FcepsilonRI, evidenced by enhanced Erk activation, calcium mobilization, degranulation, and cytokine production. These data suggested that LAB negatively regulates mast cell function. However, mast cells that lacked both linker for activation of T cells (LAT) and LAB proteins had a more severe block in FcepsilonRI-mediated signaling than LAT(-/-) mast cells, demonstrating that LAB also shares a redundant function with LAT to play a positive role in FcepsilonRI-mediated signaling.",
"title": ""
}
] | scidocsrr |
981b8ee24864cf71e9ad34c9967065ff | Integrating 3D structure into traffic scene understanding with RGB-D data | [
{
"docid": "5691ca09e609aea46b9fd5e7a83d165a",
"text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.",
"title": ""
}
] | [
{
"docid": "c460179cbdb40b9d89b3cc02276d54e1",
"text": "In recent years the sport of climbing has seen consistent increase in popularity. Climbing requires a complex skill set for successful and safe exercising. While elite climbers receive intensive expert coaching to refine this skill set, this progression approach is not viable for the amateur population. We have developed ClimbAX - a climbing performance analysis system that aims for replicating expert assessments and thus represents a first step towards an automatic coaching system for climbing enthusiasts. Through an accelerometer based wearable sensing platform, climber's movements are captured. An automatic analysis procedure detects climbing sessions and moves, which form the basis for subsequent performance assessment. The assessment parameters are derived from sports science literature and include: power, control, stability, speed. ClimbAX was evaluated in a large case study with 53 climbers under competition settings. We report a strong correlation between predicted scores and official competition results, which demonstrate the effectiveness of our automatic skill assessment system.",
"title": ""
},
{
"docid": "179e5b887f15b4ecf4ba92031a828316",
"text": "High efficiency power supply solutions for data centers are gaining more attention, in order to minimize the fast growing power demands of such loads, the 48V Voltage Regulator Module (VRM) for powering CPU is a promising solution replacing the legacy 12V VRM by which the bus distribution loss, cost and size can be dramatically minimized. In this paper, a two-stage 48V/12V/1.8V–250W VRM is proposed, the first stage is a high efficiency, high power density isolated — unregulated DC/DC converter (DCX) based on LLC resonant converter stepping the input voltage from 48V to 12V. The Matrix transformer concept was utilized for designing the high frequency transformer of the first stage, an enhanced termination loop for the synchronous rectifiers and a non-uniform winding structure is proposed resulting in significant increase in both power density and efficiency of the first stage converter. The second stage is a 4-phases buck converter stepping the voltage from 12V to 1.8V to the CPU. Since the CPU runs in the sleep mode most of the time a light load efficiency improvement method by changing the bus voltage from 12V to 6 V during light load operation is proposed showing more than 8% light load efficiency enhancement than fixed bus voltage. Experimental results demonstrate the high efficiency of the proposed solution reaching peak of 91% with a significant light load efficiency improvement.",
"title": ""
},
{
"docid": "31461de346fb454f296495287600a74f",
"text": "The working hypothesis of the paper is that motor images are endowed with the same properties as those of the (corresponding) motor representations, and therefore have the same functional relationship to the imagined or represented movement and the same causal role in the generation of this movement. The fact that the timing of simulated movements follows the same constraints as that of actually executed movements is consistent with this hypothesis. Accordingly, many neural mechanisms are activated during motor imagery, as revealed by a sharp increase in tendinous reflexes in the limb imagined to move, and by vegetative changes which correlate with the level of mental effort. At the cortical level, a specific pattern of activation, that closely resembles that of action execution, is observed in areas devoted to motor control. This activation might be the substrate for the effects of mental training. A hierarchical model of the organization of action is proposed: this model implies a short-term memory storage of a 'copy' of the various representational steps. These memories are erased when an action corresponding to the represented goal takes place. By contrast, if the action is incompletely or not executed, the whole system remains activated, and the content of the representation is rehearsed. This mechanism would be the substrate for conscious access to this content during motor imagery and mental training.",
"title": ""
},
{
"docid": "e054c2d3b52441eaf801e7d2dd54dce9",
"text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bd1fdbfcc0116dcdc5114065f32a883e",
"text": "Thousands of operations are annually guided with computer assisted surgery (CAS) technologies. As the use of these devices is rapidly increasing, the reliability of the devices becomes ever more critical. The problem of accuracy assessment of the devices has thus become relevant. During the past five years, over 200 hazardous situations have been documented in the MAUDE database during operations using these devices in the field of neurosurgery alone. Had the accuracy of these devices been periodically assessed pre-operatively, many of them might have been prevented. The technical accuracy of a commercial navigator enabling the use of both optical (OTS) and electromagnetic (EMTS) tracking systems was assessed in the hospital setting using accuracy assessment tools and methods developed by the authors of this paper. The technical accuracy was obtained by comparing the positions of the navigated tool tip with the phantom accuracy assessment points. Each assessment contained a total of 51 points and a region of surgical interest (ROSI) volume of 120x120x100 mm roughly mimicking the size of the human head. The error analysis provided a comprehensive understanding of the trend of accuracy of the surgical navigator modalities. This study showed that the technical accuracies of OTS and EMTS over the pre-determined ROSI were nearly equal. However, the placement of the particular modality hardware needs to be optimized for the surgical procedure. New applications of EMTS, which does not require rigid immobilization of the surgical area, are suggested.",
"title": ""
},
{
"docid": "48a45f03f31d8fc0daede6603f3b693a",
"text": "This paper presents GelClust, a new software that is designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike the most of commercial and non-commercial related softwares, we found that GelClust is very user-friendly and guides the user from image toward dendrogram through seven simple steps. Furthermore, the software, which is implemented in C# programming language under Windows operating system, is more accurate than similar software regarding image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported with experiments.",
"title": ""
},
{
"docid": "e306d50838fc5e140a8c96cd95fd3ca2",
"text": "Customer Relationship Management (CRM) is a strategy that supports an organization’s decision-making process to retain long-term and profitable relationships with its customers. Effective CRM analyses require a detailed data warehouse model that can support various CRM analyses and deep understanding on CRM-related business questions. In this paper, we present a taxonomy of CRM analysis categories. Our CRM taxonomy includes CRM strategies, CRM category analyses, CRM business questions, their potential uses, and key performance indicators (KPIs) for those analysis types. Our CRM taxonomy can be used in selecting and evaluating a data schema for CRM analyses, CRM vendors, CRM strategies, and KPIs.",
"title": ""
},
{
"docid": "860e3c429e6ae709ce9cbc4b6cb148db",
"text": "This paper presents an approach for performance analysis of modern enterprise-class server applications. In our experience, performance bottlenecks in these applications differ qualitatively from bottlenecks in smaller, stand-alone systems. Small applications and benchmarks often suffer from CPU-intensive hot spots. In contrast, enterprise-class multi-tier applications often suffer from problems that manifest not as hot spots, but as idle time, indicating a lack of forward motion. Many factors can contribute to undesirable idle time, including locking problems, excessive system-level activities like garbage collection, various resource constraints, and problems driving load.\n We present the design and methodology for WAIT, a tool to diagnosis the root cause of idle time in server applications. Given lightweight samples of Java activity on a single tier, the tool can often pinpoint the primary bottleneck on a multi-tier system. The methodology centers on an informative abstraction of the states of idleness observed in a running program. This abstraction allows the tool to distinguish, for example, between hold-ups on a database machine, insufficient load, lock contention in application code, and a conventional bottleneck due to a hot method. To compute the abstraction, we present a simple expert system based on an extensible set of declarative rules.\n WAIT can be deployed on the fly, without modifying or even restarting the application. Many groups in IBM have applied the tool to diagnosis performance problems in commercial systems, and we present a number of examples as case studies.",
"title": ""
},
{
"docid": "a8a8656f2f7cdcab79662cb150c8effa",
"text": "As networks grow both in importance and size, there is an increasing need for effective security monitors such as Network Intrusion Detection System to prevent such illicit accesses. Intrusion Detection Systems technology is an effective approach in dealing with the problems of network security. In this paper, we present an intrusion detection model based on hybrid fuzzy logic and neural network. The key idea is to take advantage of different classification abilities of fuzzy logic and neural network for intrusion detection system. The new model has ability to recognize an attack, to differentiate one attack from another i.e. classifying attack, and the most important, to detect new attacks with high detection rate and low false negative. Training and testing data were obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data set.",
"title": ""
},
{
"docid": "920748fbdcaf91346a40e3bf5ae53d42",
"text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].",
"title": ""
},
{
"docid": "e7eb22e4ac65696e3bb2a2611a28e809",
"text": "Cuckoo search (CS) is an efficient swarm-intelligence-based algorithm and significant developments have been made since its introduction in 2009. CS has many advantages due to its simplicity and efficiency in solving highly non-linear optimisation problems with real-world engineering applications. This paper provides a timely review of all the state-of-the-art developments in the last five years, including the discussions of theoretical background and research directions for future development of this powerful algorithm.",
"title": ""
},
{
"docid": "65cae0002bcff888d6514aa2d375da40",
"text": "We study the problem of finding efficiently computable non-degenerate multilinear maps from G1 to G2, where G1 and G2 are groups of the same prime order, and where computing discrete logarithms in G1 is hard. We present several applications to cryptography, explore directions for building such maps, and give some reasons to believe that finding examples with n > 2",
"title": ""
},
{
"docid": "a2fb1ee73713544852292721dce21611",
"text": "Large scale implementation of active RFID tag technology has been restricted by the need for battery replacement. Prolonging battery lifespan may potentially promote active RFID tags which offer obvious advantages over passive RFID systems. This paper explores some opportunities to simulate and develop a prototype RF energy harvester for 2.4 GHz band specifically designed for low power active RFID tag application. This system employs a rectenna architecture which is a receiving antenna attached to a rectifying circuit that efficiently converts RF energy to DC current. Initial ADS simulation results show that 2 V output voltage can be achieved using a 7 stage Cockroft-Walton rectifying circuitry with -4.881 dBm (0.325 mW) output power under -4 dBm (0.398 mW) input RF signal. These results lend support to the idea that RF energy harvesting is indeed promising.",
"title": ""
},
{
"docid": "08c97484fe3784e2f1fd42606b915f83",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "0e61015f3372ba177acdfcddbd0ffdfb",
"text": "INTRODUCTION\nThere are many challenges to the drug discovery process, including the complexity of the target, its interactions, and how these factors play a role in causing the disease. Traditionally, biophysics has been used for hit validation and chemical lead optimization. With its increased throughput and sensitivity, biophysics is now being applied earlier in this process to empower target characterization and hit finding. Areas covered: In this article, the authors provide an overview of how biophysics can be utilized to assess the quality of the reagents used in screening assays, to validate potential tool compounds, to test the integrity of screening assays, and to create follow-up strategies for compound characterization. They also briefly discuss the utilization of different biophysical methods in hit validation to help avoid the resource consuming pitfalls caused by the lack of hit overlap between biophysical methods. Expert opinion: The use of biophysics early on in the drug discovery process has proven crucial to identifying and characterizing targets of complex nature. It also has enabled the identification and classification of small molecules which interact in an allosteric or covalent manner with the target. By applying biophysics in this manner and at the early stages of this process, the chances of finding chemical leads with novel mechanisms of action are increased. In the future, focused screens with biophysics as a primary readout will become increasingly common.",
"title": ""
},
{
"docid": "51df36570be2707556a8958e16682612",
"text": "Through co-design of Augmented Reality (AR) based teaching material, this research aims to enhance collaborative learning experience in primary school education. It will introduce an interactive AR Book based on primary school textbook using tablets as the real time interface. The development of this AR Book employs co-design methods to involve children, teachers, educators and HCI experts from the early stages of the design process. Research insights from the co-design phase will be implemented in the AR Book design. The final outcome of the AR Book will be evaluated in the classroom to explore its effect on the collaborative experience of primary school students. The research aims to answer the question - Can Augmented Books be designed for primary school students in order to support collaboration? This main research question is divided into two sub-questions as follows - How can co-design methods be applied in designing Augmented Book with and for primary school children? And what is the effect of the proposed Augmented Book on primary school students' collaboration? This research will not only present a practical application of co-designing AR Book for and with primary school children, it will also clarify the benefit of AR for education in terms of collaborative experience.",
"title": ""
},
{
"docid": "d59e64c1865193db3aaecc202f688690",
"text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.",
"title": ""
},
{
"docid": "e748162d1e0de342983f7028156b3cf6",
"text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but it also has a high computation cost even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We also provide a simple distance-based ambient occlusion approximation as well as an ambient illumination precomputation approach, both of which account for fiber-level self-occlusion of yarn. Finally, we discuss how to use a physical-based shading model with our fiber-level cloth rendering method and how to handle cloth animations with temporal coherency. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.",
"title": ""
},
{
"docid": "5028d250c60a70c0ed6954581ab6cfa7",
"text": "Social Commerce as a result of the advancement of Social Networking Sites and Web 2.0 is increasing as a new model of online shopping. With techniques to improve the website using AJAX, Adobe Flash, XML, and RSS, Social Media era has changed the internet user behavior to be more communicative and active in internet, they love to share information and recommendation among communities. Social commerce also changes the way people shopping through online. Social commerce will be the new way of online shopping nowadays. But the new challenge is business has to provide the interactive website yet interesting website for internet users, the website should give experience to satisfy their needs. This purpose of research is to analyze the website quality (System Quality, Information Quality, and System Quality) as well as interaction feature (communication feature) impact on social commerce website and customers purchase intention. Data from 134 customers of social commerce website were used to test the model. Multiple linear regression is used to calculate the statistic result while confirmatory factor analysis was also conducted to test the validity from each variable. The result shows that website quality and communication feature are important aspect for customer purchase intention while purchasing in social commerce website.",
"title": ""
},
{
"docid": "5fbd1f14c8f4e8dc82bc86ad8b27c115",
"text": "Computer-animated characters are common in popular culture and have begun to be used as experimental tools in social cognitive neurosciences. Here we investigated how appearance of these characters' influences perception of their actions. Subjects were presented with different characters animated either with motion data captured from human actors or by interpolating between poses (keyframes) designed by an animator, and were asked to categorize the motion as biological or artificial. The response bias towards 'biological', derived from the Signal Detection Theory, decreases with characters' anthropomorphism, while sensitivity is only affected by the simplest rendering style, point-light displays. fMRI showed that the response bias correlates positively with activity in the mentalizing network including left temporoparietal junction and anterior cingulate cortex, and negatively with regions sustaining motor resonance. The absence of significant effect of the characters on the brain activity suggests individual differences in the neural responses to unfamiliar artificial agents. While computer-animated characters are invaluable tools to investigate the neural bases of social cognition, further research is required to better understand how factors such as anthropomorphism affect their perception, in order to optimize their appearance for entertainment, research or therapeutic purposes.",
"title": ""
}
] | scidocsrr |
0bd9c78ab4332552b8a0deee10c732db | Programming models for sensor networks: A survey | [
{
"docid": "f3574f1e3f0ef3a5e1d20cb15b040105",
"text": "Composed of tens of thousands of tiny devices with very limited resources (\"motes\"), sensor networks are subject to novel systems problems and constraints. The large number of motes in a sensor network means that there will often be some failing nodes; networks must be easy to repopulate. Often there is no feasible method to recharge motes, so energy is a precious resource. Once deployed, a network must be reprogrammable although physically unreachable, and this reprogramming can be a significant energy cost.We present Maté, a tiny communication-centric virtual machine designed for sensor networks. Maté's high-level interface allows complex programs to be very short (under 100 bytes), reducing the energy cost of transmitting new programs. Code is broken up into small capsules of 24 instructions, which can self-replicate through the network. Packet sending and reception capsules enable the deployment of ad-hoc routing and data aggregation algorithms. Maté's concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.",
"title": ""
}
] | [
{
"docid": "0f3cad05c9c267f11c4cebd634a12c59",
"text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.",
"title": ""
},
{
"docid": "49fa638e44d13695217c7f1bbb3f6ebd",
"text": "Kernel methods enable the direct usage of structured representations of textual data during language learning and inference tasks. Expressive kernels, such as Tree Kernels, achieve excellent performance in NLP. On the other side, deep neural networks have been demonstrated effective in automatically learning feature representations during training. However, their input is tensor data, i.e., they cannot manage rich structured information. In this paper, we show that expressive kernels and deep neural networks can be combined in a common framework in order to (i) explicitly model structured information and (ii) learn non-linear decision functions. We show that the input layer of a deep architecture can be pre-trained through the application of the Nyström low-rank approximation of kernel spaces. The resulting “kernelized” neural network achieves state-of-the-art accuracy in three different tasks.",
"title": ""
},
{
"docid": "4b68d3c94ef785f80eac9c4c6ca28cfe",
"text": "We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2-norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.",
"title": ""
},
{
"docid": "54b43b5e3545710dfe37f55b93084e34",
"text": "Cloud computing is a model for delivering information technology services, wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The capability to provision and release cloud computing resources with minimal management effort or service provider interaction led to the rapid increase of the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing three important stages through which a request is processed. The three stages are data center selection, virtual machine scheduling, and task scheduling at a selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment. A brief analysis of various scheduling parameters considered in these methods is also presented. The results show that the best power-saving percentage level can be achieved by using both DVFS and DNS.",
"title": ""
},
{
"docid": "ca8bb290339946e2d3d3e14c01023aa5",
"text": "OBJECTIVE\nTo establish a centile chart of cervical length between 18 and 32 weeks of gestation in a low-risk population of women.\n\n\nMETHODS\nA prospective longitudinal cohort study of women with a low risk, singleton pregnancy using public healthcare facilities in Cape Town, South Africa. Transvaginal measurement of cervical length was performed between 16 and 32 weeks of gestation and used to construct centile charts. The distribution of cervical length was determined for gestational ages and was used to establish estimates of longitudinal percentiles. Centile charts were constructed for nulliparous and multiparous women together and separately.\n\n\nRESULTS\nCentile estimation was based on data from 344 women. Percentiles showed progressive cervical shortening with increasing gestational age. Averaged over the entire follow-up period, mean cervical length was 1.5 mm shorter in nulliparous women compared with multiparous women (95% CI, 0.4-2.6).\n\n\nCONCLUSIONS\nEstablishment of longitudinal reference values of cervical length in a low-risk population will contribute toward a better understanding of cervical length in women at risk for preterm labor.",
"title": ""
},
{
"docid": "2d0cc17115692f1e72114c636ba74811",
"text": "A new inline coupling topology for narrowband helical resonator filters is proposed that allows to introduce selectively located transmission zeros (TZs) in the stopband. We show that a pair of helical resonators arranged in an interdigital configuration can realize a large range of in-band coupling coefficient values and also selectively position a TZ in the stopband. The proposed technique dispenses the need for auxiliary elements, so that the size, complexity, power handling and insertion loss of the filter are not compromised. A second order prototype filter with dimensions of the order of 0.05λ, power handling capability up to 90 W, measured insertion loss of 0.18 dB and improved selectivity is presented.",
"title": ""
},
{
"docid": "b5d3c7822f2ba9ca89d474dda5f180b6",
"text": "We consider a class of a nested optimization problems involving inner and outer objectives. We observe that by taking into explicit account the optimization dynamics for the inner objective it is possible to derive a general framework that unifies gradient-based hyperparameter optimization and meta-learning (or learning-to-learn). Depending on the specific setting, the variables of the outer objective take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We show that some recently proposed methods in the latter setting can be instantiated in our framework and tackled with the same gradient-based algorithms. Finally, we discuss possible design patterns for learning-to-learn and present encouraging preliminary experiments for few-shot learning.",
"title": ""
},
{
"docid": "d8752c40782d8189d454682d1d30738e",
"text": "This article reviews the empirical literature on personality, leadership, and organizational effectiveness to make 3 major points. First, leadership is a real and vastly consequential phenomenon, perhaps the single most important issue in the human sciences. Second, leadership is about the performance of teams, groups, and organizations. Good leadership promotes effective team and group performance, which in turn enhances the well-being of the incumbents; bad leadership degrades the quality of life for everyone associated with it. Third, personality predicts leadership—who we are is how we lead—and this information can be used to select future leaders or improve the performance of current incumbents.",
"title": ""
},
{
"docid": "1461157186183f11d7270d89eecd926a",
"text": "This review analyzes trends and commonalities among prominent theories of media effects. On the basis of exemplary meta-analyses of media effects and bibliometric studies of well-cited theories, we identify and discuss five features of media effects theories as well as their empirical support. Each of these features specifies the conditions under which media may produce effects on certain types of individuals. Our review ends with a discussion of media effects in newer media environments. This includes theories of computer-mediated communication, the development of which appears to share a similar pattern of reformulation from unidirectional, receiver-oriented views, to theories that recognize the transactional nature of communication. We conclude by outlining challenges and promising avenues for future research.",
"title": ""
},
{
"docid": "1a69b777e03d2d2589dd9efb9cda2a10",
"text": "Three-dimensional measurement of joint motion is a promising tool for clinical evaluation and therapeutic treatment comparisons. Although many devices exist for joints kinematics assessment, there is a need for a system that could be used in routine practice. Such a system should be accurate, ambulatory, and easy to use. The combination of gyroscopes and accelerometers (i.e., inertial measurement unit) has proven to be suitable for unrestrained measurement of orientation during a short period of time (i.e., few minutes). However, due to their inability to detect horizontal reference, inertial-based systems generally fail to measure differential orientation, a prerequisite for computing the three-dimentional knee joint angle recommended by the Internal Society of Biomechanics (ISB). A simple method based on a leg movement is proposed here to align two inertial measurement units fixed on the thigh and shank segments. Based on the combination of the former alignment and a fusion algorithm, the three-dimensional knee joint angle is measured and compared with a magnetic motion capture system during walking. The proposed system is suitable to measure the absolute knee flexion/extension and abduction/adduction angles with mean (SD) offset errors of -1 degree (1 degree ) and 0 degrees (0.6 degrees ) and mean (SD) root mean square (RMS) errors of 1.5 degrees (0.4 degrees ) and 1.7 degrees (0.5 degrees ). The system is also suitable for the relative measurement of knee internal/external rotation (mean (SD) offset error of 3.4 degrees (2.7 degrees )) with a mean (SD) RMS error of 1.6 degrees (0.5 degrees ). The method described in this paper can be easily adapted in order to measure other joint angular displacements such as elbow or ankle.",
"title": ""
},
{
"docid": "88def96b7287ce217f1abf8fb1b413a5",
"text": "Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN’s: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.",
"title": ""
},
{
"docid": "2de3078c249eb87b041a2a74b6efcfdf",
"text": "To lay the groundwork for devising, improving and implementing strategies to prevent or delay the onset of disability in the elderly, we conducted a systematic literature review of longitudinal studies published between 1985 and 1997 that reported statistical associations between individual base-line risk factors and subsequent functional status in community-living older persons. Functional status decline was defined as disability or physical function limitation. We used MEDLINE, PSYCINFO, SOCA, EMBASE, bibliographies and expert consultation to select the articles, 78 of which met the selection criteria. Risk factors were categorized into 14 domains and coded by two independent abstractors. Based on the methodological quality of the statistical analyses between risk factors and functional outcomes (e.g. control for base-line functional status, control for confounding, attrition rate), the strength of evidence was derived for each risk factor. The association of functional decline with medical findings was also analyzed. The highest strength of evidence for an increased risk in functional status decline was found for (alphabetical order) cognitive impairment, depression, disease burden (comorbidity), increased and decreased body mass index, lower extremity functional limitation, low frequency of social contacts, low level of physical activity, no alcohol use compared to moderate use, poor self-perceived health, smoking and vision impairment. The review revealed that some risk factors (e.g. nutrition, physical environment) have been neglected in past research. This review will help investigators set priorities for future research of the Disablement Process, plan health and social services for elderly persons and develop more cost-effective programs for preventing disability among them.",
"title": ""
},
{
"docid": "96af2e34acf9f1e9c0c57cc24795d0f9",
"text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt ones’ strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold’em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.",
"title": ""
},
{
"docid": "80c9f1d983bc3ddfd73cdf2abc936600",
"text": "Jazz guitar solos are improvised melody lines played on one instrument on top of a chordal accompaniment (comping). As the improvisation happens spontaneously, a reference score is non-existent, only a lead sheet. There are situations, however, when one would like to have the original melody lines in the form of notated music, see the Real Book. The motivation is either for the purpose of practice and imitation or for musical analysis. In this work, an automatic transcriber for jazz guitar solos is developed. It resorts to a very intuitive representation of tonal music signals: the pitchgram. No instrument-specific modeling is involved, so the transcriber should be applicable to other pitched instruments as well. Neither is there the need to learn any note profiles prior to or during the transcription. Essentially, the proposed transcriber is a decision tree, thus a classifier, with a depth of 3. It has a (very) low computational complexity and can be run on-line. The decision rules can be refined or extended with no or little musical education. The transcriber’s performance is evaluated on a set of ten jazz solo excerpts and compared with a state-of-the-art transcription system for the guitar plus PYIN. We achieve an improvement of 34 % w.r.t. the reference system and 19 % w.r.t. PYIN in terms of the F-measure. Another measure of accuracy, the error score, attests that the number of erroneous pitch detections is reduced by more than 50 % w.r.t. the reference system and by 45 % w.r.t. PYIN.",
"title": ""
},
{
"docid": "c0cbea5f38a04e0d123fc51af30d08c0",
"text": "This brief presents a high-efficiency current-regulated charge pump for a white light-emitting diode driver. The charge pump incorporates no series current regulator, unlike conventional voltage charge pump circuits. Output current regulation is accomplished by the proposed pumping current control. The experimental system, with two 1-muF flying and load capacitors, delivers a regulated 20-mA current from an input supply voltage of 2.8-4.2 V. The measured variation is less than 0.6% at a pumping frequency of 200 kHz. The active area of the designed chip is 0.43 mm2 in a 0.5-mum CMOS process.",
"title": ""
},
{
"docid": "334e97a1f50b5081ac08651c1d7ed943",
"text": "Veterans of all war eras have a high rate of chronic disease, mental health disorders, and chronic multi-symptom illnesses (CMI).(1-3) Many veterans report symptoms that affect multiple biological systems as opposed to isolated disease states. Standard medical treatments often target isolated disease states such as headaches, insomnia, or back pain and at times may miss the more complex, multisystem dysfunction that has been documented in the veteran population. Research has shown that veterans have complex symptomatology involving physical, cognitive, psychological, and behavioral disturbances, such as difficult to diagnose pain patterns, irritable bowel syndrome, chronic fatigue, anxiety, depression, sleep disturbance, or neurocognitive dysfunction.(2-4) Meditation and acupuncture are each broad-spectrum treatments designed to target multiple biological systems simultaneously, and thus, may be well suited for these complex chronic illnesses. The emerging literature indicates that complementary and integrative medicine (CIM) approaches augment standard medical treatments to enhance positive outcomes for those with chronic disease, mental health disorders, and CMI.(5-12.)",
"title": ""
},
{
"docid": "a6a98d0599c1339c1f2c6a6c7525b843",
"text": "We consider a generalized version of the Steiner problem in graphs, motivated by the wire routing phase in physical VLSI design: given a connected, undirected distance graph with required classes of vertices and Steiner vertices, find a shortest connected subgraph containing at least one vertex of each required class. We show that this problem is NP-hard, even if there are no Steiner vertices and the graph is a tree. Moreover, the same complexity result holds if the input class Steiner graph additionally is embedded in a unit grid, if each vertex has degree at most three, and each class consists of no more than three vertices. For similar restricted versions, we prove MAX SNP-hardness and we show that there exists no polynomial-time approximation algorithm with a constant bound on the relative error, unless P = NP. We propose two efficient heuristics computing different approximate solutions in time 0(/E] + /VI log IV]) and in time O(c(lEl + IV1 log (VI)), respectively, where E is the set of edges in the given graph, V is the set of vertices, and c is the number of classes. We present some promising implementation results.",
"title": ""
},
{
"docid": "c9f2fd6bdcca5e55c5c895f65768e533",
"text": "We implemented live-textured geometry model creation with immediate coverage feedback visualizations in AR on the Microsoft HoloLens. A user walking and looking around a physical space can create a textured model of the space, ready for remote exploration and AR collaboration. Out of the box, a HoloLens builds a triangle mesh of the environment while scanning and being tracked in a new environment. The mesh contains vertices, triangles, and normals, but not color. We take the video stream from the color camera and use it to color a UV texture to be mapped to the mesh. Due to the limited graphics memory of the HoloLens, we use a fixed-size texture. Since the mesh generation dynamically changes in real time, we use an adaptive mapping scheme that evenly distributes every triangle of the dynamic mesh onto the fixed-size texture and adapts to new geometry without compromising existing color data. Occlusion is also considered. The user can walk around their environment and continuously fill in the texture while growing the mesh in real-time. We describe our texture generation algorithm and illustrate benefits and limitations of our system with example modeling sessions. Having first-person immediate AR feedback on the quality of modeled physical infrastructure, both in terms of mesh resolution and texture quality, helps the creation of high-quality colored meshes with this standalone wireless device and a fixed memory footprint in real-time.",
"title": ""
},
{
"docid": "160726aa34ba677292a2ae14666727e8",
"text": "Child sex tourism is an obscure industry where the tourist‟s primary purpose is to engage in a sexual experience with a child. Under international legislation, tourism with the intent of having sexual relations with a minor is in violation of the UN Convention of the Rights of a Child. The intent and act is a crime and in violation of human rights. This paper examines child sex tourism in the Philippines, a major destination country for the purposes of child prostitution. The purpose is to bring attention to the atrocities that occur under the guise of tourism. It offers a definition of the crisis, a description of the victims and perpetrators, and a discussion of the social and cultural factors that perpetuate the problem. Research articles and reports from non-government organizations, advocacy groups, governments and educators were examined. Although definitional challenges did emerge, it was found that several of the articles and reports varied little in their definitions of child sex tourism and in the descriptions of the victims and perpetrators. A number of differences emerged that identified the social and cultural factors responsible for the creation and perpetuation of the problem.",
"title": ""
}
] | scidocsrr |
dbd0d01702a50dcaab924ba4033ab378 | An information theoretical approach to prefrontal executive function | [
{
"docid": "5dde27787ee92c2e56729b25b9ca4311",
"text": "The prefrontal cortex (PFC) subserves cognitive control: the ability to coordinate thoughts or actions in relation with internal goals. Its functional architecture, however, remains poorly understood. Using brain imaging in humans, we showed that the lateral PFC is organized as a cascade of executive processes from premotor to anterior PFC regions that control behavior according to stimuli, the present perceptual context, and the temporal episode in which stimuli occur, respectively. The results support an unified modular model of cognitive control that describes the overall functional organization of the human lateral PFC and has basic methodological and theoretical implications.",
"title": ""
}
] | [
{
"docid": "594bbdf08b7c3d0a31b2b0f60e50bae3",
"text": "This paper concerns the behavior of spatially extended dynamical systems —that is, systems with both temporal and spatial degrees of freedom. Such systems are common in physics, biology, and even social sciences such as economics. Despite their abundance, there is little understanding of the spatiotemporal evolution of these complex systems. ' Seemingly disconnected from this problem are two widely occurring phenomena whose very generality require some unifying underlying explanation. The first is a temporal effect known as 1/f noise or flicker noise; the second concerns the evolution of a spatial structure with scale-invariant, self-similar (fractal) properties. Here we report the discovery of a general organizing principle governing a class of dissipative coupled systems. Remarkably, the systems evolve naturally toward a critical state, with no intrinsic time or length scale. The emergence of the self-organized critical state provides a connection between nonlinear dynamics, the appearance of spatial self-similarity, and 1/f noise in a natural and robust way. A short account of some of these results has been published previously. The usual strategy in physics is to reduce a given problem to one or a few important degrees of freedom. The effect of coupling between the individual degrees of freedom is usually dealt with in a perturbative manner —or in a \"mean-field manner\" where the surroundings act on a given degree of freedom as an external field —thus again reducing the problem to a one-body one. In dynamics theory one sometimes finds that complicated systems reduce to a few collective degrees of freedom. This \"dimensional reduction'* has been termed \"selforganization, \" or the so-called \"slaving principle, \" and much insight into the behavior of dynamical systems has been achieved by studying the behavior of lowdimensional at tractors. On the other hand, it is well known that some dynamical systems act in a more concerted way, where the individual degrees of freedom keep each other in a more or less stab1e balance, which cannot be described as a \"perturbation\" of some decoupled state, nor in terms of a few collective degrees of freedom. For instance, ecological systems are organized such that the different species \"support\" each other in a way which cannot be understood by studying the individual constituents in isolation. The same interdependence of species also makes the ecosystem very susceptible to small changes or \"noise.\" However, the system cannot be too sensitive since then it could not have evolved into its present state in the first place. Owing to this balance we may say that such a system is \"critical. \" We shall see that this qualitative concept of criticality can be put on a firm quantitative basis. Such critical systems are abundant in nature. We shaB see that the dynamics of a critical state has a specific ternporal fingerprint, namely \"flicker noise, \" in which the power spectrum S(f) scales as 1/f at low frequencies. Flicker noise is characterized by correlations extended over a wide range of time scales, a clear indication of some sort of cooperative effect. Flicker noise has been observed, for example, in the light from quasars, the intensity of sunspots, the current through resistors, the sand flow in an hour glass, the flow of rivers such as the Nile, and even stock exchange price indices. ' All of these may be considered to be extended dynamical systems. Despite the ubiquity of flicker noise, its origin is not well understood. 
Indeed, one may say that because of its ubiquity, no proposed mechanism to data can lay claim as the single general underlying root of 1/f noise. We shall argue that flicker noise is in fact not noise but reflects the intrinsic dynamics of self-organized critical systems. Another signature of criticality is spatial selfsimilarity. It has been pointed out that nature is full of self-similar \"fractal\" structures, though the physical reason for this is not understood. \" Most notably, the whole universe is an extended dynamical system where a self-similar cosmic string structure has been claimed. Turbulence is a phenomenon where self-similarity is believed to occur in both space and time. Cooperative critical phenomena are well known in the context of phase transitions in equilibrium statistical mechanics. ' At the transition point, spatial selfsirnilarity occurs, and the dynamical response function has a characteristic power-law \"1/f\" behavior. (We use quotes because often flicker noise involves frequency spectra with dependence f ~ with P only roughly equal to 1.0.) Low-dimensional nonequilibrium dynamical systems also undergo phase transitions (bifurcations, mode locking, intermittency, etc.) where the properties of the attractors change. However, the critical point can be reached only by fine tuning a parameter (e.g. , temperature), and so may occur only accidentally in nature: It",
"title": ""
},
{
"docid": "3fcce3664db5812689c121138e2af280",
"text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "63c2662fdac3258587c5b1baa2133df9",
"text": "Automatic design via Bayesian optimization holds great promise given the constant increase of available data across domains. However, it faces difficulties from high-dimensional, potentially discrete, search spaces. We propose to probabilistically embed inputs into a lower dimensional, continuous latent space, where we perform gradient-based optimization guided by a Gaussian process. Building on variational autoncoders, we use both labeled and unlabeled data to guide the encoding and increase its accuracy. In addition, we propose an adversarial extension to render the latent representation invariant with respect to specific design attributes, which allows us to transfer these attributes across structures. We apply the framework both to a functional-protein dataset and to perform optimization of drag coefficients directly over high-dimensional shapes without incorporating domain knowledge or handcrafted features.",
"title": ""
},
{
"docid": "072b17732d8b628d3536e7045cd0047d",
"text": "In this paper, we propose a high-speed parallel 128 bit multiplier for Ghash Function in conjunction with its FPGA implementation. Through the use of Verilog the designs are evaluated by using Xilinx Vertax5 with 65nm technic and 30,000 logic cells. The highest throughput of 30.764Gpbs can be achieved on virtex5 with the consumption of 8864 slices LUT. The proposed design of the multiplier can be utilized as a design IP core for the implementation of the Ghash Function. The architecture of the multiplier can also apply in more general polynomial basis. Moreover it can be used as arithmetic module in other encryption field.",
"title": ""
},
{
"docid": "561b37c506657693d27fa65341faf51e",
"text": "Currently, much of machine learning is opaque, just like a “black box”. However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.",
"title": ""
},
{
"docid": "f8e3b21fd5481137a80063e04e9b5488",
"text": "On the basis of the notion that the ability to exert self-control is critical to the regulation of aggressive behaviors, we suggest that mindfulness, an aspect of the self-control process, plays a key role in curbing workplace aggression. In particular, we note the conceptual and empirical distinctions between dimensions of mindfulness (i.e., mindful awareness and mindful acceptance) and investigate their respective abilities to regulate workplace aggression. In an experimental study (Study 1), a multiwave field study (Study 2a), and a daily diary study (Study 2b), we established that the awareness dimension, rather than the acceptance dimension, of mindfulness plays a more critical role in attenuating the association between hostility and aggression. In a second multiwave field study (Study 3), we found that mindful awareness moderates the association between hostility and aggression by reducing the extent to which individuals use dysfunctional emotion regulation strategies (i.e., surface acting), rather than by reducing the extent to which individuals engage in dysfunctional thought processes (i.e., rumination). The findings are discussed in terms of the implications of differentiating the dimensions and mechanisms of mindfulness for regulating workplace aggression. (PsycINFO Database Record",
"title": ""
},
{
"docid": "4502ba935124c2daa9a49fc24ec5865b",
"text": "Medical image processing is the most challenging and emerging field now a day’s. In this field, detection of brain tumor from MRI brain scan has become one of the most challenging problems, due to complex structure of brain. The quantitative analysis of MRI brain tumor allows obtaining useful key indicators of disease progression. A computer aided diagnostic system has been proposed here for detecting the tumor texture in biological study. This is an attempt made which describes the proposed strategy for detection of tumor with the help of segmentation techniques in MATLAB; which incorporates preprocessing stages of noise removal, image enhancement and edge detection. Processing stages includes segmentation like intensity and watershed based segmentation, thresholding to extract the area of unwanted cells from the whole image. Here algorithms are proposed to calculate area and percentage of the tumor. Keywords— MRI, FCM, MKFCM, SVM, Otsu, threshold, fudge factor",
"title": ""
},
{
"docid": "11c245ca7bc133155ff761374dfdea6e",
"text": "Received Nov 12, 2017 Revised Jan 20, 2018 Accepted Feb 11, 2018 In this paper, a modification of PVD (Pixel Value Differencing) algorithm is used for Image Steganography in spatial domain. It is normalizing secret data value by encoding method to make the new pixel edge difference less among three neighbors (horizontal, vertical and diagonal) and embedding data only to less intensity pixel difference areas or regions. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Color images performances are better than gray images. However, in this work the focus is mainly on gray images. The strenght of this scheme is that any random hidden/secret data do not make any shuttle differences to Steg-image compared to original image. The bit plane slicing is used to analyze the maximum payload that has been embeded into the cover image securely. The simulation results show that the proposed algorithm is performing better and showing great consistent results for PSNR, MSE values of any images, also against Steganalysis attack.",
"title": ""
},
{
"docid": "05b1be7a90432eff4b62675826b77e09",
"text": "People invest time, attention, and emotion while engaging in various activities in the real-world, for either purposes of awareness or participation. Social media platforms such as Twitter offer tremendous opportunities for people to become engaged in such real-world events through information sharing and communicating about these events. However, little is understood about the factors that affect people’s Twitter engagement in such real-world events. In this paper, we address this question by first operationalizing a person’s Twitter engagement in real-world events such as posting, retweeting, or replying to tweets about such events. Next, we construct statistical models that examine multiple predictive factors associated with four different perspectives of users’ Twitter engagement, and quantify their potential influence on predicting the (i) presence; and (ii) degree – of the user’s engagement with 643 real-world events. We also consider the effect of these factors with respect to a finer granularization of the different categories of events. We find that the measures of people’s prior Twitter activities, topical interests, geolocation, and social network structures are all variously correlated to their engagement with real-world events.",
"title": ""
},
{
"docid": "d6f322f4dd7daa9525f778ead18c8b5e",
"text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.",
"title": ""
},
{
"docid": "8a1e94245d8fbdaf97402923d4dbc213",
"text": "This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.",
"title": ""
},
{
"docid": "840d4b26eec402038b9b3462fc0a98ac",
"text": "A bench model of the new generation intelligent universal transformer (IUT) has been recently developed for distribution applications. The distribution IUT employs high-voltage semiconductor device technologies along with multilevel converter circuits for medium-voltage grid connection. This paper briefly describes the basic operation of the IUT and its experimental setup. Performances under source and load disturbances are characterized with extensive tests using a voltage sag generator and various linear and nonlinear loads. Experimental results demonstrate that IUT input and output can avoid direct impact from its opposite side disturbances. The output voltage is well regulated when the voltage sag is applied to the input. The input voltage and current maintains clean sinusoidal and unity power factor when output is nonlinear load. Under load transients, the input and output voltages remain well regulated. These key features prove that the power quality performance of IUT is far superior to that of conventional copper-and-iron based transformers",
"title": ""
},
{
"docid": "e6dba9e9ad2db632caed6b19b9f5a010",
"text": "Efficient and accurate similarity searching on a large time series data set is an important but non- trivial problem. In this work, we propose a new approach to improve the quality of similarity search on time series data by combining symbolic aggregate approximation (SAX) and piecewise linear approximation. The approach consists of three steps: transforming real valued time series sequences to symbolic strings via SAX, pattern matching on the symbolic strings and a post-processing via Piecewise Linear Approximation.",
"title": ""
},
{
"docid": "d6cf367f29ed1c58fb8fd0b7edf69458",
"text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.",
"title": ""
},
{
"docid": "641d09ff15b731b679dbe3e9004c1578",
"text": "In recent years, geological disposal of radioactive waste has focused on placement of highand intermediate-level wastes in mined underground caverns at depths of 500–800 m. Notwithstanding the billions of dollars spent to date on this approach, the difficulty of finding suitable sites and demonstrating to the public and regulators that a robust safety case can be developed has frustrated attempts to implement disposal programmes in several countries, and no disposal facility for spent nuclear fuel exists anywhere. The concept of deep borehole disposal was first considered in the 1950s, but was rejected as it was believed to be beyond existing drilling capabilities. Improvements in drilling and associated technologies and advances in sealing methods have prompted a re-examination of this option for the disposal of high-level radioactive wastes, including spent fuel and plutonium. Since the 1950s, studies of deep boreholes have involved minimal investment. However, deep borehole disposal offers a potentially safer, more secure, cost-effective and environmentally sound solution for the long-term management of high-level radioactive waste than mined repositories. Potentially it could accommodate most of the world’s spent fuel inventory. This paper discusses the concept, the status of existing supporting equipment and technologies and the challenges that remain.",
"title": ""
},
{
"docid": "ab677299ffa1e6ae0f65daf5de75d66c",
"text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.",
"title": ""
},
{
"docid": "e7f91b90eab54dfd7f115a3a0225b673",
"text": "The recent trend of outsourcing network functions, aka. middleboxes, raises confidentiality and integrity concern on redirected packet, runtime state, and processing result. The outsourced middleboxes must be protected against cyber attacks and malicious service provider. It is challenging to simultaneously achieve strong security, practical performance, complete functionality and compatibility. Prior software-centric approaches relying on customized cryptographic primitives fall short of fulfilling one or more desired requirements. In this paper, after systematically addressing key challenges brought to the fore, we design and build a secure SGX-assisted system, LightBox, which supports secure and generic middlebox functions, efficient networking, and most notably, lowoverhead stateful processing. LightBox protects middlebox from powerful adversary, and it allows stateful network function to run at nearly native speed: it adds only 3μs packet processing delay even when tracking 1.5M concurrent flows.",
"title": ""
},
{
"docid": "684b9d64f4476a6b9dd3df1bd18bcb1d",
"text": "We present the cases of three children with patent ductus arteriosus (PDA), pulmonary arterial hypertension (PAH), and desaturation. One of them had desaturation associated with atrial septal defect (ASD). His ASD, PAH, and desaturation improved after successful device closure of the PDA. The other two had desaturation associated with Down syndrome. One had desaturation only at room air oxygen (21% oxygen) but well saturated with 100% oxygen, subsequently underwent successful device closure of the PDA. The other had experienced desaturation at a younger age but spontaneously recovered when he was older, following attempted device closure of the PDA, with late embolization of the device.",
"title": ""
},
{
"docid": "527e750a6047100cba1f78a3036acb9b",
"text": "This paper presents a Generative Adversarial Network (GAN) to model multi-turn dialogue generation, which trains a latent hierarchical recurrent encoder-decoder simultaneously with a discriminative classifier that make the prior approximate to the posterior. Experiments show that our model achieves better results.",
"title": ""
},
{
"docid": "27ddea786e06ffe20b4f526875cdd76b",
"text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized-as myth, folklore, and common sense had long understood-that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see AcFIIa ION_sYNTHESIS xxroTESis .) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles ;.dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic .(waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and .although of great interest to the study of the mind-body problem, these .findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment .",
"title": ""
}
] | scidocsrr |
a5fac85a85177ff57a7cc5e8506bf308 | Causal Discovery from Subsampled Time Series Data by Constraint Optimization | [
{
"docid": "17deb6c21da616a73a6daedf971765c3",
"text": "Recent approaches to causal discovery based on Boolean satisfiability solvers have opened new opportunities to consider search spaces for causal models with both feedback cycles and unmeasured confounders. However, the available methods have so far not been able to provide a principled account of how to handle conflicting constraints that arise from statistical variability. Here we present a new approach that preserves the versatility of Boolean constraint solving and attains a high accuracy despite the presence of statistical errors. We develop a new logical encoding of (in)dependence constraints that is both well suited for the domain and allows for faster solving. We represent this encoding in Answer Set Programming (ASP), and apply a state-of-theart ASP solver for the optimization task. Based on different theoretical motivations, we explore a variety of methods to handle statistical errors. Our approach currently scales to cyclic latent variable models with up to seven observed variables and outperforms the available constraintbased methods in accuracy.",
"title": ""
}
] | [
{
"docid": "e78e70d347fb76a79755442cabe1fbe0",
"text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.",
"title": ""
},
{
"docid": "c2558388fb20454fa6f4653b1e4ab676",
"text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.",
"title": ""
},
{
"docid": "18f739a605222415afdea4f725201fba",
"text": "I discuss open theoretical questions pertaining to the modified dynamics (MOND)–a proposed alternative to dark matter, which posits a breakdown of Newtonian dynamics in the limit of small accelerations. In particular, I point the reasons for thinking that MOND is an effective theory–perhaps, despite appearance, not even in conflict with GR. I then contrast the two interpretations of MOND as modified gravity and as modified inertia. I describe two mechanical models that are described by potential theories similar to (non-relativistic) MOND: a potential-flow model, and a membrane model. These might shed some light on a possible origin of MOND. The possible involvement of vacuum effects is also speculated on.",
"title": ""
},
{
"docid": "c197e1ab49287fc571f2a99a9501bf84",
"text": "X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. We report on experiments carried out with more than 100, 000 X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies.",
"title": ""
},
{
"docid": "ed0d1e110347313285a6b478ff8875e3",
"text": "Data mining is an area of computer science with a huge prospective, which is the process of discovering or extracting information from large database or datasets. There are many different areas under Data Mining and one of them is Classification or the supervised learning. Classification also can be implemented through a number of different approaches or algorithms. We have conducted the comparison between three algorithms with help of WEKA (The Waikato Environment for Knowledge Analysis), which is an open source software. It contains different type's data mining algorithms. This paper explains discussion of Decision tree, Bayesian Network and K-Nearest Neighbor algorithms. Here, for comparing the result, we have used as parameters the correctly classified instances, incorrectly classified instances, time taken, kappa statistic, relative absolute error, and root relative squared error.",
"title": ""
},
{
"docid": "45c04c80a5e4c852c4e84ba66bd420dd",
"text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.",
"title": ""
},
{
"docid": "de70b208289bad1bc410bcb7a76e56df",
"text": "Instant Messaging chat sessions are realtime text-based conversations which can be analyzed using dialogue-act models. We describe a statistical approach for modelling and detecting dialogue acts in Instant Messaging dialogue. This involved the collection of a small set of task-based dialogues and annotating them with a revised tag set. We then dealt with segmentation and synchronisation issues which do not arise in spoken dialogue. The model we developed combines naive Bayes and dialogue-act n-grams to obtain better than 80% accuracy in our tagging experiment.",
"title": ""
},
{
"docid": "530906b8827394b2dde40ae98d050b7b",
"text": "The aim of transfer learning is to improve prediction accuracy on a target task by exploiting the training examples for tasks that are related to the target one. Transfer learning has received more attention in recent years, because this technique is considered to be helpful in reducing the cost of labeling. In this paper, we propose a very simple approach to transfer learning: TrBagg, which is the extension of bagging. TrBagg is composed of two stages: Many weak classifiers are first generated as in standard bagging, and these classifiers are then filtered based on their usefulness for the target task. This simplicity makes it easy to work reasonably well without severe tuning of learning parameters. Further, our algorithm equips an algorithmic scheme to avoid negative transfer. We applied TrBagg to personalized tag prediction tasks for social bookmarks Our approach has several convenient characteristics for this task such as adaptation to multiple tasks with low computational cost.",
"title": ""
},
{
"docid": "06e50887ddec8b0e858173499ce2ee11",
"text": "Over the last few years, we've seen a plethora of Internet of Things (IoT) solutions, products, and services make their way into the industry's marketplace. All such solutions will capture large amounts of data pertaining to the environment as well as their users. The IoT's objective is to learn more and better serve system users. Some IoT solutions might store data locally on devices (\"things\"), whereas others might store it in the cloud. The real value of collecting data comes through data processing and aggregation on a large scale, where new knowledge can be extracted. However, such procedures can lead to user privacy issues. This article discusses some of the main challenges of privacy in the IoT as well as opportunities for research and innovation. The authors also introduce some of the ongoing research efforts that address IoT privacy issues.",
"title": ""
},
{
"docid": "b42b17131236abc1ee3066905025aa8c",
"text": "The planet Mars, while cold and arid today, once possessed a warm and wet climate, as evidenced by extensive fluvial features observable on its surface. It is believed that the warm climate of the primitive Mars was created by a strong greenhouse effect caused by a thick CO2 atmosphere. Mars lost its warm climate when most of the available volatile CO2 was fixed into the form of carbonate rock due to the action of cycling water. It is believed, however, that sufficient CO2 to form a 300 to 600 mb atmosphere may still exist in volatile form, either adsorbed into the regolith or frozen out at the south pole. This CO2 may be released by planetary warming, and as the CO2 atmosphere thickens, positive feedback is produced which can accelerate the warming trend. Thus it is conceivable, that by taking advantage of the positive feedback inherent in Mars' atmosphere/regolith CO2 system, that engineering efforts can produce drastic changes in climate and pressure on a planetary scale. In this paper we propose a mathematical model of the Martian CO2 system, and use it to produce analysis which clarifies the potential of positive feedback to accelerate planetary engineering efforts. It is shown that by taking advantage of the feedback, the requirements for planetary engineering can be reduced by about 2 orders of magnitude relative to previous estimates. We examine the potential of various schemes for producing the initial warming to drive the process, including the stationing of orbiting mirrors, the importation of natural volatiles with high greenhouse capacity from the outer solar system, and the production of artificial halocarbon greenhouse gases on the Martian surface through in-situ industry. If the orbital mirror scheme is adopted, mirrors with dimension on the order or 100 km radius are required to vaporize the CO2 in the south polar cap. If manufactured of solar sail like material, such mirrors would have a mass on the order of 200,000 tonnes. If manufactured in space out of asteroidal or Martian moon material, about 120 MWe-years of energy would be needed to produce the required aluminum. This amount of power can be provided by near-term multimegawatt nuclear power units, such as the 5 MWe modules now under consideration for NEP spacecraft. Orbital transfer of very massive bodies from the outer solar system can be accomplished using nuclear thermal rocket engines using the asteroid's volatile material as propellant. Using major planets for gravity assists, the rocket ∆V required to move an outer solar system asteroid onto a collision trajectory with Mars can be as little as 300 m/s. If the asteroid is made of NH3, specific impulses of about 400 s can be attained, and as little as 10% of the asteroid will be required for propellant. Four 5000 MWt NTR engines would require a 10 year burn time to push a 10 billion tonne asteroid through a ∆V of 300 m/s. About 4 such objects would be sufficient to greenhouse Mars. Greenhousing Mars via the manufacture of halocarbon gases on the planet's surface may well be the most practical option. Total surface power requirements to drive planetary warming using this method are calculated and found to be on the order of 1000 MWe, and the required times scale for climate and atmosphere modification is on the order of 50 years. It is concluded that a drastic modification of Martian conditions can be achieved using 21st century technology. The Mars so produced will closely resemble the conditions existing on the primitive Mars. 
Humans operating on the surface of such a Mars would require breathing gear, but pressure suits would be unnecessary. With outside atmospheric pressures raised, it will be possible to create large dwelling areas by means of very large inflatable structures. Average temperatures could be above the freezing point of water for significant regions during portions of the year, enabling the growth of plant life in the open. The spread of plants could produce enough oxygen to make Mars habitable for animals in several millennia. More rapid oxygenation would require engineering efforts supported by multi-terrawatt power sources. It is speculated that the desire to speed the terraforming of Mars will be a driver for developing such technologies, which in turn will define a leap in human power over nature as dramatic as that which accompanied the creation of post-Renaissance industrial civilization.",
"title": ""
},
{
"docid": "85908a576c13755e792d52d02947f8b3",
"text": "Quick Response Code has been widely used in the automatic identification fields. In order to adapting various sizes, a little dirty or damaged, and various lighting conditions of bar code image, this paper proposes a novel implementation of real-time Quick Response Code recognition using mobile, which is an efficient technology used for data transferring. An image processing system based on mobile is described to be able to binarize, locate, segment, and decode the QR Code. Our experimental results indicate that these algorithms are robust to real world scene image.",
"title": ""
},
{
"docid": "e9474d646b9da5e611475f4cdfdfc30e",
"text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.",
"title": ""
},
{
"docid": "0fcd04f5dccf595d2c08cff23168ee5e",
"text": "PubChem (http://pubchem.ncbi.nlm.nih.gov) is a public repository for biological properties of small molecules hosted by the US National Institutes of Health (NIH). PubChem BioAssay database currently contains biological test results for more than 700 000 compounds. The goal of PubChem is to make this information easily accessible to biomedical researchers. In this work, we present a set of web servers to facilitate and optimize the utility of biological activity information within PubChem. These web-based services provide tools for rapid data retrieval, integration and comparison of biological screening results, exploratory structure-activity analysis, and target selectivity examination. This article reviews these bioactivity analysis tools and discusses their uses. Most of the tools described in this work can be directly accessed at http://pubchem.ncbi.nlm.nih.gov/assay/. URLs for accessing other tools described in this work are specified individually.",
"title": ""
},
{
"docid": "b4e676d4d11039c5c5feb5e549eb364f",
"text": "Abst ract Qualit at ive case st udy met hodology provides t ools f or researchers t o st udy complex phenomena wit hin t heir cont ext s. When t he approach is applied correct ly, it becomes a valuable met hod f or healt h science research t o develop t heory, evaluat e programs, and develop int ervent ions. T he purpose of t his paper is t o guide t he novice researcher in ident if ying t he key element s f or designing and implement ing qualit at ive case st udy research project s. An overview of t he t ypes of case st udy designs is provided along wit h general recommendat ions f or writ ing t he research quest ions, developing proposit ions, det ermining t he “case” under st udy, binding t he case and a discussion of dat a sources and t riangulat ion. T o f acilit at e applicat ion of t hese principles, clear examples of research quest ions, st udy proposit ions and t he dif f erent t ypes of case st udy designs are provided Keywo rds Case St udy and Qualit at ive Met hod Publicat io n Dat e 12-1-2008 Creat ive Co mmo ns License Journal Home About T his Journal Aims & Scope Edit orial Board Policies Open Access",
"title": ""
},
{
"docid": "eb4cac4ac288bc65df70f906b674ceb5",
"text": "LPWAN (Low Power Wide Area Networks) technologies have been attracting attention continuously in IoT (Internet of Things). LoRaWAN is present on the market as a LPWAN technology and it has features such as low power consumption, low transceiver chip cost and wide coverage area. In the LoRaWAN, end devices must perform a join procedure for participating in the network. Attackers could exploit the join procedure because it has vulnerability in terms of security. Replay attack is a method of exploiting the vulnerability in the join procedure. In this paper, we propose a attack scenario and a countermeasure against replay attack that may occur in the join request transfer process.",
"title": ""
},
{
"docid": "6724af38a637d61ccc2a4ad8119c6e1a",
"text": "INTRODUCTION Pivotal to athletic performance is the ability to more maintain desired athletic performance levels during particularly critical periods of competition [1], such as during pressurised situations that typically evoke elevated levels of anxiety (e.g., penalty kicks) or when exposed to unexpected adversities (e.g., unfavourable umpire calls on crucial points) [2, 3]. These kinds of situations become markedly important when athletes, who are separated by marginal physical and technical differences, are engaged in closely contested matches, games, or races [4]. It is within these competitive conditions, in particular, that athletes’ responses define their degree of success (or lack thereof); responses that are largely dependent on athletes’ psychological attributes [5]. One of these attributes appears to be mental toughness (MT), which has often been classified as a critical success factor due to the role it plays in fostering adaptive responses to positively and negatively construed pressures, situations, and events [6 8]. However, as scholars have intensified",
"title": ""
},
{
"docid": "ff8c3ce63b340a682e99540313be7fe7",
"text": "Detecting and identifying any phishing websites in real-time, particularly for e-banking is really a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, Fuzzy Data Mining (DM) Techniques can be an effective tool in assessing and identifying phishing websites for e-banking since it offers a more natural way of dealing with quality factors rather than exact values. In this paper, we present novel approach to overcome the ‘fuzziness’ in the e-banking phishing website assessment and propose an intelligent resilient and effective model for detecting e-banking phishing websites. The proposed model is based on Fuzzy logic (FL) combined with Data Mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying there phishing types and defining six e-banking phishing website attack criteria’s with a layer structure. The proposed e-banking phishing website model showed the significance importance of the phishing website two criteria’s (URL & Domain Identity) and (Security & Encryption) in the final phishing detection rate result, taking into consideration its characteristic association and relationship with each others as showed from the fuzzy data mining classification and association rule algorithms. Our phishing model also showed the insignificant trivial influence of the (Page Style & Content) criteria along with (Social Human Factor) criteria in the phishing detection final rate result.",
"title": ""
},
{
"docid": "27c7afd468d969509eec2b2a3260a679",
"text": "The impact of predictive genetic testing on cancer care can be measured by the increased demand for and utilization of genetic services as well as in the progress made in reducing cancer risks in known mutation carriers. Nonetheless, differential access to and utilization of genetic counseling and cancer predisposition testing among underserved racial and ethnic minorities compared with the white population has led to growing health care disparities in clinical cancer genetics that are only beginning to be addressed. Furthermore, deficiencies in the utility of genetic testing in underserved populations as a result of limited testing experience and in the effectiveness of risk-reducing interventions compound access and knowledge-base disparities. The recent literature on racial/ethnic health care disparities is briefly reviewed, and is followed by a discussion of the current limitations of risk assessment and genetic testing outside of white populations. The importance of expanded testing in underserved populations is emphasized.",
"title": ""
},
{
"docid": "788bf97b435dfbe9d31373e21bc76716",
"text": "In this paper, we study the design and workspace of a 6–6 cable-suspended parallel robot. The workspace volume is characterized as the set of points where the centroid of the moving platform can reach with tensions in all suspension cables at a constant orientation. This paper attempts to tackle some aspects of optimal design of a 6DOF cable robot by addressing the variations of the workspace volume and the accuracy of the robot using different geometric configurations, different sizes and orientations of the moving platform. The global condition index is used as a performance index of a robot with respect to the force and velocity transmission over the whole workspace. The results are used for design analysis of the cable-robot for a specific motion of the moving platform. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8255146164ff42f8755d8e74fd24cfa1",
"text": "We present a named-entity recognition (NER) system for parallel multilingual text. Our system handles three languages (i.e., English, French, and Spanish) and is tailored to the biomedical domain. For each language, we design a supervised knowledge-based CRF model with rich biomedical and general domain information. We use the sentence alignment of the parallel corpora, the word alignment generated by the GIZA++[8] tool, and Wikipedia-based word alignment in order to transfer system predictions made by individual language models to the remaining parallel languages. We re-train each individual language system using the transferred predictions and generate a final enriched NER model for each language. The enriched system performs better than the initial system based on the predictions transferred from the other language systems. Each language model benefits from the external knowledge extracted from biomedical and general domain resources.",
"title": ""
}
] | scidocsrr |
708a0d082f133d01b236fd86ff4c9732 | CBCD: Cloned buggy code detector | [
{
"docid": "3cd67f617b3a68844e9766d6f670f6ef",
"text": "Software security vulnerabilities are discovered on an almost daily basis and have caused substantial damage. Aiming at supporting early detection and resolution for them, we have conducted an empirical study on thousands of vulnerabilities and found that many of them are recurring due to software reuse. Based on the knowledge gained from the study, we developed SecureSync, an automatic tool to detect recurring software vulnerabilities on the systems that reuse source code or libraries. The core of SecureSync includes two techniques to represent and compute the similarity of vulnerable code across different systems. The evaluation for 60 vulnerabilities on 176 releases of 119 open-source software systems shows that SecureSync is able to detect recurring vulnerabilities with high accuracy and to identify 90 releases having potentially vulnerable code that are not reported or fixed yet, even in mature systems. A couple of cases were actually confirmed by their developers.",
"title": ""
}
] | [
{
"docid": "a0fa0ea42201d552e9d7c750d9e3450d",
"text": "With the proliferation of computing and information technologies, we have an opportunity to envision a fully participatory democracy in the country through a fully digitized voting platform. However, the growing interconnectivity of systems and people across the globe, and the proliferation of cybersecurity issues pose a significant bottleneck towards achieving such a vision. In this paper, we discuss a vision to modernize our voting processes and discuss the challenges for creating a national e-voting framework that incorporates policies, standards and technological infrastructure that is secure, privacy-preserving, resilient and transparent. Through partnerships among private industry, academia, and State and Federal Government, technology must be the catalyst to develop a national platform for American voters. Along with integrating biometrics to authenticate each registered voter for transparency and accountability, the platform provides depth in the e-voting infrastructure with emerging blockchain technologies. We outline the way voting process runs today with the challenges; states are having from funding to software development concerns. Additionally, we highlight attacks from malware infiltrations from off the shelf products made from factories from countries such as China. This paper illustrates a strategic level of voting challenges and modernizing processes that will enhance the voter’s trust in America democracy.",
"title": ""
},
{
"docid": "104cf54cfa4bc540b17176593cdb77d8",
"text": "Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.",
"title": ""
},
{
"docid": "9af928f8d620630cfd2938905adeb930",
"text": "This paper describes the application of a pedagogical model called \\learning as a research activity\" [D. Gil-P erez and J. Carrascosa-Alis, Science Education 78 (1994) 301{315] to the design and implementation of a two-semester course on compiler design for Computer Engineering students. In the new model, the classical pattern of classroom activity based mainly on one-way knowledge transmission/reception of pre-elaborated concepts is replaced by an active working environment that resembles that of a group of novice researchers under the supervision of an expert. The new model, rooted in the now commonly-accepted constructivist postulates, strives for meaningful acquisition of fundamental concepts through problem solving |in close parallelism to the construction of scienti c knowledge through history.",
"title": ""
},
{
"docid": "9180fe4fc7020bee9a52aa13de3adf54",
"text": "A new Depth Image Layers Separation (DILS) algorithm for synthesizing inter-view images based on disparity depth map layers representation is presented. The approach is to separate the depth map into several layers identified through histogram-based clustering. Each layer is extracted using inter-view interpolation to create objects based on location and depth. DILS is a new paradigm in selecting interesting image locations based on depth, but also in producing new image representations that allow objects or parts of an image to be described without the need of segmentation and identification. The image view synthesis can reduce the configuration complexity of multi-camera arrays in 3D imagery and free-viewpoint applications. The simulation results show that depth layer separation is able to create inter-view images that may be integrated with other techniques such as occlusion handling processes. The DILS algorithm can be implemented using both simple as well as sophisticated stereo matching methods to synthesize inter-view images.",
"title": ""
},
{
"docid": "8cfb150c71b310cf89bb5ded86ec7684",
"text": "This article argues that technological innovation is transforming the flow of information, the fluidity of social action, and is giving birth to new forms of bottom up innovation that are capable of expanding and exploding old theories of reproduction and resistance because 'smart mobs', 'street knowledge', and 'social movements' cannot be neutralized by powerful structural forces in the same old ways. The purpose of this article is to develop the concept of YPAR 2.0 in which new technologies enable young people to visualize, validate, and transform social inequalities by using local knowledge in innovative ways that deepen civic engagement, democratize data, expand educational opportunity, inform policy, and mobilize community assets. Specifically this article documents how digital technology (including a mobile, mapping and SMS platform called Streetwyze and paper-mapping tool Local Ground) - coupled with 'ground-truthing' - an approach in which community members work with researchers to collect and verify 'public' data - sparked a food revolution in East Oakland that led to an increase in young people's self-esteem, environmental stewardship, academic engagement, and positioned urban youth to become community leaders and community builders who are connected and committed to health and well-being of their neighborhoods. This article provides an overview of how the YPAR 2.0 Model was developed along with recommendations and implications for future research and collaborations between youth, teachers, neighborhood leaders, and youth serving organizations.",
"title": ""
},
{
"docid": "5f8a5ea87859bf80cb630b0f3734d4cb",
"text": "Existing Natural Language Generation (nlg) systems are weak AI systems and exhibit limited capabilities when language generation tasks demand higher levels of creativity, originality and brevity. Eective solutions or, at least evaluations of modern nlg paradigms for such creative tasks have been elusive, unfortunately. is paper introduces and addresses the task of coherent story generation from independent descriptions, describing a scene or an event. Towards this, we explore along two popular text-generation paradigms – (1) Statistical Machine Translation (smt), posing story generation as a translation problem and (2) Deep Learning, posing story generation as a sequence to sequence learning problem. In SMT, we chose two popular methods such as phrase based SMT (pb-SMT) and syntax based SMT (syntax-SMT) to ‘translate’ the incoherent input text into stories. We then implement a deep recurrent neural network (rnn) architecture that encodes sequence of variable length input descriptions to corresponding latent representations and decodes them to produce well formed comprehensive story like summaries. e ecacy of the suggested approaches is demonstrated on a publicly available dataset with the help of popular machine translation and summarization evaluation metrics. We believe, a system like ours has dierent interesting applicationsfor example, creating news articles from phrases of event information.",
"title": ""
},
{
"docid": "b3cdd76dd50bea401ede3bb945c377dc",
"text": "First we report on a new threat campaign, underway in Korea, which infected around 20,000 Android users within two months. The campaign attacked mobile users with malicious applications spread via different channels, such as email attachments or SMS spam. A detailed investigation of the Android malware resulted in the identification of a new Android malware family Android/BadAccents. The family represents current state-of-the-art in mobile malware development for banking trojans. Second, we describe in detail the techniques this malware family uses and confront them with current state-of-the-art static and dynamic codeanalysis techniques for Android applications. We highlight various challenges for automatic malware analysis frameworks that significantly hinder the fully automatic detection of malicious components in current Android malware. Furthermore, the malware exploits a previously unknown tapjacking vulnerability in the Android operating system, which we describe. As a result of this work, the vulnerability, affecting all Android versions, will be patched in one of the next releases of the Android Open Source Project.",
"title": ""
},
{
"docid": "aa9450cdbdb1162015b4d931c32010fb",
"text": "The design of a low-cost rectenna for low-power applications is presented. The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed. Measurements indicate the validity range of the analytical models.",
"title": ""
},
{
"docid": "01fac1331705dcda8ce14b0145854294",
"text": "This meta-analysis evaluated predictors of both objective and subjective sales performance. Biodata measures and sales ability inventories were good predictors of the ratings criterion, with corrected rs of .52 and .45, respectively. Potency (a subdimension of the Big 5 personality dimension Extraversion) predicted supervisor ratings of performance (r = .28) and objective measures of sales (r — .26). Achievement (a component of the Conscientiousness dimension) predicted ratings (r = .25) and objective sales (r = .41). General cognitive ability showed a correlation of .40 with ratings but only .04 with objective sales. Similarly, age predicted ratings (r = .26) but not objective sales (r = —.06). On the basis of a small number of studies, interest appears to be a promising predictor of sales success.",
"title": ""
},
{
"docid": "cc3f821bd9617d31a8b303c4982e605f",
"text": "Body composition in older adults can be assessed using simple, convenient but less precise anthropometric methods to assess (regional) body fat and skeletal muscle, or more elaborate, precise and costly methods such as computed tomography and magnetic resonance imaging. Body weight and body fat percentage generally increase with aging due to an accumulation of body fat and a decline in skeletal muscle mass. Body weight and fatness plateau at age 75–80 years, followed by a gradual decline. However, individual weight patterns may differ and the periods of weight loss and weight (re)gain common in old age may affect body composition. Body fat redistributes with aging, with decreasing subcutaneous and appendicular fat and increasing visceral and ectopic fat. Skeletal muscle mass declines with aging, a process called sarcopenia. Obesity in old age is associated with a higher risk of mobility limitations, disability and mortality. A higher waist circumference and more visceral fat increase these risks, independent of overall body fatness, as do involuntary weight loss and weight cycling. The role of low skeletal muscle mass in the development of mobility limitations and disability remains controversial, but it is much smaller than the role of high body fat. Low muscle mass does not seem to increase mortality risk in older adults.",
"title": ""
},
{
"docid": "f21850cde63b844e95db5b9916db1c30",
"text": "Foreign Exchange (Forex) market is a complex and challenging task for prediction due to uncertainty movement of exchange rate. However, these movements over timeframe also known as historical Forex data that offered a generic repeated trend patterns. This paper uses the features extracted from trend patterns to model and predict the next day trend. Hidden Markov Models (HMMs) is applied to learn the historical trend patterns, and use to predict the next day movement trends. We use the 2011 Forex historical data of Australian Dollar (AUS) and European Union Dollar (EUD) against the United State Dollar (USD) for modeling, and the 2012 and 2013 Forex historical data for validating the proposed model. The experimental results show outperforms prediction result for both years.",
"title": ""
},
{
"docid": "cd31be485b4b914508a5a9e7c5445459",
"text": "Deep learning has become increasingly popular in both academic and industrial areas in the past years. Various domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. However, current studies on deep learning mainly focus on data sets with balanced class labels, while its performance on imbalanced data is not well examined. Imbalanced data sets exist widely in real world and they have been providing great challenges for classification tasks. In this paper, we focus on the problem of classification using deep network on imbalanced data sets. Specifically, a novel loss function called mean false error together with its improved version mean squared false error are proposed for the training of deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from both majority class and minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach compared with conventional methods in classifying imbalanced data sets on deep neural networks.",
"title": ""
},
{
"docid": "7c1fd4f8978e012ed00249271ed8c0cf",
"text": "Graph clustering aims to discovercommunity structures in networks, the task being fundamentally challenging mainly because the topology structure and the content of the graphs are difficult to represent for clustering analysis. Recently, graph clustering has moved from traditional shallow methods to deep learning approaches, thanks to the unique feature representation learning capability of deep learning. However, existing deep approaches for graph clustering can only exploit the structure information, while ignoring the content information associated with the nodes in a graph. In this paper, we propose a novel marginalized graph autoencoder (MGAE) algorithm for graph clustering. The key innovation of MGAE is that it advances the autoencoder to the graph domain, so graph representation learning can be carried out not only in a purely unsupervised setting by leveraging structure and content information, it can also be stacked in a deep fashion to learn effective representation. From a technical viewpoint, we propose a marginalized graph convolutional network to corrupt network node content, allowing node content to interact with network features, and marginalizes the corrupted features in a graph autoencoder context to learn graph feature representations. The learned features are fed into the spectral clustering algorithm for graph clustering. Experimental results on benchmark datasets demonstrate the superior performance of MGAE, compared to numerous baselines.",
"title": ""
},
{
"docid": "6701b0ad4c53a57984504c4465bf1364",
"text": "In the aftermath of recent corporate scandals, managers and researchers have turned their attention to questions of ethics management. We identify five common myths about business ethics and provide responses that are grounded in theory, research, and business examples. Although the scientific study of business ethics is relatively new, theory and research exist that can guide executives who are trying to better manage their employees' and their own ethical behavior. We recommend that ethical conduct be managed proactively via explicit ethical leadership and conscious management of the organization's ethical culture.",
"title": ""
},
{
"docid": "3621dd85dc4ba3007cfa8ec1017b4e96",
"text": "The current lack of knowledge about the effect of maternally administered drugs on the developing fetus is a major public health concern worldwide. The first critical step toward predicting the safety of medications in pregnancy is to screen drug compounds for their ability to cross the placenta. However, this type of preclinical study has been hampered by the limited capacity of existing in vitro and ex vivo models to mimic physiological drug transport across the maternal-fetal interface in the human placenta. Here the proof-of-principle for utilizing a microengineered model of the human placental barrier to simulate and investigate drug transfer from the maternal to the fetal circulation is demonstrated. Using the gestational diabetes drug glyburide as a model compound, it is shown that the microphysiological system is capable of reconstituting efflux transporter-mediated active transport function of the human placental barrier to limit fetal exposure to maternally administered drugs. The data provide evidence that the placenta-on-a-chip may serve as a new screening platform to enable more accurate prediction of drug transport in the human placenta.",
"title": ""
},
{
"docid": "17a11a48d3ee024b8a606caf2c028986",
"text": "For evaluating or training different kinds of vision algorithms, a large amount of precise and reliable data is needed. In this paper we present a system to create extended synthetic sequences of traffic environment scenarios, associated with several types of ground truth data. By integrating vehicle dynamics in a configuration tool, and by using path-tracing in an external rendering engine to render the scenes, a system is created that allows ongoing and flexible creation of highly realistic traffic images. For all images, ground truth data is provided for depth, optical flow, surface normals and semantic scene labeling. Sequences that are produced with this system are more varied and closer to natural images than other synthetic datasets before.",
"title": ""
},
{
"docid": "db849661cd9f748b05183cb39e36383e",
"text": "Generative adversarial networks (GANs) implicitly learn the probability distribution of a dataset and can draw samples from the distribution. This paper presents, Tabular GAN (TGAN), a generative adversarial network which can generate tabular data like medical or educational records. Using the power of deep neural networks, TGAN generates high-quality and fully synthetic tables while simultaneously generating discrete and continuous variables. When we evaluate our model on three datasets, we find that TGAN outperforms conventional statistical generative models in both capturing the correlation between columns and scaling up for large datasets.",
"title": ""
},
{
"docid": "587f58f291732bfb8954e34564ba76fd",
"text": "Blood pressure oscillometric waveforms behave as amplitude modulated nonlinear signals with frequency fluctuations. Their oscillating nature can be better analyzed by the digital Taylor-Fourier transform (DTFT), recently proposed for phasor estimation in oscillating power systems. Based on a relaxed signal model that includes Taylor components greater than zero, the DTFT is able to estimate not only the oscillation itself, as does the digital Fourier transform (DFT), but also its derivatives included in the signal model. In this paper, an oscillometric waveform is analyzed with the DTFT, and its zeroth and first oscillating harmonics are illustrated. The results show that the breathing activity can be separated from the cardiac one through the critical points of the first component, determined by the zero crossings of the amplitude derivatives estimated from the third Taylor order model. On the other hand, phase derivative estimates provide the fluctuations of the cardiac frequency and its derivative, new parameters that could improve the precision of the systolic and diastolic blood pressure assignment. The DTFT envelope estimates uniformly converge from K=3, substantially improving the harmonic separation of the DFT.",
"title": ""
},
{
"docid": "a52d2a2c8fdff0bef64edc1a97b89c63",
"text": "This paper provides a review of recent developments in speech recognition research. The concept of sources of knowledge is introduced and the use of knowledge to generate and verify hypotheses is discussed. The difficulties that arise in the construction of different types of speech recognition systems are discussed and the structure and performance of several such systems is presented. Aspects of component subsystems at the acoustic, phonetic, syntactic, and semantic levels are presented. System organizations that are required for effective interaction and use of various component subsystems in the presence of error and ambiguity are discussed.",
"title": ""
}
] | scidocsrr |
976d92080eeeba1720e4a263f7f45c66 | Power grid's Intelligent Stability Analysis based on big data technology | [
{
"docid": "56785d7f01cb2e1ab8754cbb931a9d0b",
"text": "This paper describes an online dynamic security assessment scheme for large-scale interconnected power systems using phasor measurements and decision trees. The scheme builds and periodically updates decision trees offline to decide critical attributes as security indicators. Decision trees provide online security assessment and preventive control guidelines based on real-time measurements of the indicators from phasor measurement units. The scheme uses a new classification method involving each whole path of a decision tree instead of only classification results at terminal nodes to provide more reliable security assessment results for changes in system conditions. The approaches developed are tested on a 2100-bus, 2600-line, 240-generator operational model of the Entergy system. The test results demonstrate that the proposed scheme is able to identify key security indicators and give reliable and accurate online dynamic security predictions.",
"title": ""
}
] | [
{
"docid": "848fbbcf6e679191fd4160db5650ef65",
"text": "The capturing of angular and spatial information of the scene using single camera is made possible by new emerging technology referred to as plenoptic camera. Both angular and spatial information, enable various post-processing applications, e.g. refocusing, synthetic aperture, super-resolution, and 3D scene reconstruction. In the past, multiple traditional cameras were used to capture the angular and spatial information of the scene. However, recently with the advancement in optical technology, plenoptic cameras have been introduced to capture the scene information. In a plenoptic camera, a lenslet array is placed between the main lens and the image sensor that allows multiplexing of the spatial and angular information onto a single image, also referred to as plenoptic image. The placement of the lenslet array relative to the main lens and the image sensor, results in two different optical designs of a plenoptic camera, also referred to as plenoptic 1.0 and plenoptic 2.0. In this work, we present a novel dataset captured with plenoptic 1.0 (Lytro Illum) and plenoptic 2.0 (Raytrix R29) cameras for the same scenes under the same conditions. The dataset provides the benchmark contents for various research and development activities for plenoptic images.",
"title": ""
},
{
"docid": "487c011cb0701b4b909dedca2d128fe6",
"text": "It is necessary and essential to discovery protein function from the novel primary sequences. Wet lab experimental procedures are not only time-consuming, but also costly, so predicting protein structure and function reliably based only on amino acid sequence has significant value. TATA-binding protein (TBP) is a kind of DNA binding protein, which plays a key role in the transcription regulation. Our study proposed an automatic approach for identifying TATA-binding proteins efficiently, accurately, and conveniently. This method would guide for the special protein identification with computational intelligence strategies. Firstly, we proposed novel fingerprint features for TBP based on pseudo amino acid composition, physicochemical properties, and secondary structure. Secondly, hierarchical features dimensionality reduction strategies were employed to improve the performance furthermore. Currently, Pretata achieves 92.92% TATA-binding protein prediction accuracy, which is better than all other existing methods. The experiments demonstrate that our method could greatly improve the prediction accuracy and speed, thus allowing large-scale NGS data prediction to be practical. A web server is developed to facilitate the other researchers, which can be accessed at http://server.malab.cn/preTata/ .",
"title": ""
},
{
"docid": "0669dc3c9867752cf88e6b46ce73e07d",
"text": "In this paper, we focus on the problem of preserving the privacy of sensitive relationships in graph data. We refer to the problem of inferring sensitive relationships from anonymized graph data as link re-identification. We propose five different privacy preservation strategies, which vary in terms of the amount of data removed (and hence their utility) and the amount of privacy preserved. We assume the adversary has an accurate predictive model for links, and we show experimentally the success of different link re-identification strategies under varying structural characteristics of the data.",
"title": ""
},
{
"docid": "6745e91294ae763f1f7ad7790bc9ccb4",
"text": "In this paper we propose an asymmetric semantic similarity among instances within an ontology. We aim to define a measurement of semantic similarity that exploit as much as possible the knowledge stored in the ontology taking into account different hints hidden in the ontology definition. The proposed similarity measurement considers different existing similarities, which we have combined and extended. Moreover, the similarity assessment is explicitly parameterised according to the criteria induced by the context. The parameterisation aims to assist the user in the decision making pertaining to similarity evaluation, as the criteria can be refined according to user needs. Experiments and an evaluation of the similarity assessment are presented showing the efficiency of the method.",
"title": ""
},
{
"docid": "fa1025c86ce9fce67ee148b7a37975da",
"text": "Context-aware Web services are emerging as a promising technology for the electronic businesses in mobile and pervasive environments. Unfortunately, complex context-aware services are still hard to build. In this paper, we present a modeling language for the model-driven development of context-aware Web services based on the Unified Modeling Language (UML). Specifically, we show how UML can be used to specify information related to the design of context-aware services. We present the abstract syntax and notation of the language and illustrate its usage using an example service. Our language offers significant design flexibility that considerably simplifies the development of context-aware Web services.",
"title": ""
},
{
"docid": "372f137098bd5817896d82ed0cb0c771",
"text": "Under today's bursty web traffic, the fine-grained per-container control promises more efficient resource provisioning for web services and better resource utilization in cloud datacenters. In this paper, we present Two-stage Stochastic Programming Resource A llocator (2SPRA). It optimizes resource provisioning for containerized n-tier web services in accordance with fluctuations of incoming workload to accommodate predefined SLOs on response latency. In particular, 2SPRA is capable of minimizing resource over-provisioning by addressing dynamics of web traffic as workload uncertainty in a native stochastic optimization model. Using special-purpose OpenOpt optimization framework, we fully implement 2SPRA in Python and evaluate it against three other existing allocation schemes, in a Docker-based CoreOS Linux VMs on Amazon EC2. We generate workloads based on four real-world web traces of various traffic variations: AOL, WorldCup98, ClarkNet, and NASA. Our experimental results demonstrate that 2SPRA achieves the minimum resource over-provisioning outperforming other schemes. In particular, 2SPRA allocates only 6.16 percent more than application's actual demand on average and at most 7.75 percent in the worst case. It achieves 3x further reduction in total resources provisioned compared to other schemes delivering overall cost-savings of 53.6 percent on average and up to 66.8 percent. Furthermore, 2SPRA demonstrates consistency in its provisioning decisions and robust responsiveness against workload fluctuations.",
"title": ""
},
{
"docid": "78b453d487294121a14e71e639906c36",
"text": "Modern mobile devices provide several functionalities and new ones are being added at a breakneck pace. Unfortunately browsing the menu and accessing the functions of a mobile phone is not a trivial task for visual impaired users. Low vision people typically rely on screen readers and voice commands. However, depending on the situations, screen readers are not ideal because blind people may need their hearing for safety, and automatic recognition of voice commands is challenging in noisy environments. Novel smart watches technologies provides an interesting opportunity to design new forms of user interaction with mobile phones. We present our first works towards the realization of a system, based on the combination of a mobile phone and a smart watch for gesture control, for assisting low vision people during daily life activities. More specifically we propose a novel approach for gesture recognition which is based on global alignment kernels and is shown to be effective in the challenging scenario of user independent recognition. This method is used to build a gesture-based user interaction module and is embedded into a system targeted to visually impaired which will also integrate several other modules. We present two of them: one for identifying wet floor signs, the other for automatic recognition of predefined logos.",
"title": ""
},
{
"docid": "565b07fee5a5812d04818fa132c0da4c",
"text": "PHP is the most popular scripting language for web applications. Because no native solution to compile or protect PHP scripts exists, PHP applications are usually shipped as plain source code which is easily understood or copied by an adversary. In order to prevent such attacks, commercial products such as ionCube, Zend Guard, and Source Guardian promise a source code protection. In this paper, we analyze the inner working and security of these tools and propose a method to recover the source code by leveraging static and dynamic analysis techniques. We introduce a generic approach for decompilation of obfuscated bytecode and show that it is possible to automatically recover the original source code of protected software. As a result, we discovered previously unknown vulnerabilities and backdoors in 1 million lines of recovered source code of 10 protected applications.",
"title": ""
},
{
"docid": "6286480f676c75e1cac4af9329227258",
"text": "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel object and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a modelbased route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask the question if and to what extent and quality such a skill can directly be acquired in a data-driven way— bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.",
"title": ""
},
{
"docid": "0674479836883d572b05af6481f27a0d",
"text": "Contents Preface vii Chapter 1. Graph Theory in the Information Age 1 1.1. Introduction 1 1.2. Basic definitions 3 1.3. Degree sequences and the power law 6 1.4. History of the power law 8 1.5. Examples of power law graphs 10 1.6. An outline of the book 17 Chapter 2. Old and New Concentration Inequalities 21 2.1. The binomial distribution and its asymptotic behavior 21 2.2. General Chernoff inequalities 25 2.3. More concentration inequalities 30 2.4. A concentration inequality with a large error estimate 33 2.5. Martingales and Azuma's inequality 35 2.6. General martingale inequalities 38 2.7. Supermartingales and Submartingales 41 2.8. The decision tree and relaxed concentration inequalities 46 Chapter 3. A Generative Model — the Preferential Attachment Scheme 55 3.1. Basic steps of the preferential attachment scheme 55 3.2. Analyzing the preferential attachment model 56 3.3. A useful lemma for rigorous proofs 59 3.4. The peril of heuristics via an example of balls-and-bins 60 3.5. Scale-free networks 62 3.6. The sharp concentration of preferential attachment scheme 64 3.7. Models for directed graphs 70 Chapter 4. Duplication Models for Biological Networks 75 4.1. Biological networks 75 4.2. The duplication model 76 4.3. Expected degrees of a random graph in the duplication model 77 4.4. The convergence of the expected degrees 79 4.5. The generating functions for the expected degrees 83 4.6. Two concentration results for the duplication model 84 4.7. Power law distribution of generalized duplication models 89 Chapter 5. Random Graphs with Given Expected Degrees 91 5.1. The Erd˝ os-Rényi model 91 5.2. The diameter of G n,p 95 iii iv CONTENTS 5.3. A general random graph model 97 5.4. Size, volume and higher order volumes 97 5.5. Basic properties of G(w) 100 5.6. Neighborhood expansion in random graphs 103 5.7. A random power law graph model 107 5.8. Actual versus expected degree sequence 109 Chapter 6. The Rise of the Giant Component 113 6.1. No giant component if w < 1? 114 6.2. Is there a giant component if˜w > 1? 115 6.3. No giant component if˜w < 1? 116 6.4. Existence and uniqueness of the giant component 117 6.5. A lemma on neighborhood growth 126 6.6. The volume of the giant component 129 6.7. Proving the volume estimate of the giant component 131 6.8. Lower bounds for the volume of the giant component 136 6.9. The complement of the giant component and its size 138 6.10. …",
"title": ""
},
{
"docid": "97e2de6bfce73c9a5fa0a474ded5b37a",
"text": "OBJECTIVE\nThis study was undertaken to determine the effects of rectovaginal fascia reattachment on symptoms and vaginal topography.\n\n\nSTUDY DESIGN\nStandardized preoperative and postoperative assessments of vaginal topography (the Pelvic Organ Prolapse staging system of the International Continence Society, American Urogynecologic Society, and Society of Gynecologic Surgeons) and 5 symptoms commonly attributed to rectocele were used to evaluate 66 women who underwent rectovaginal fascia reattachment for rectocele repair. All patients had abnormal fluoroscopic results with objective rectocele formation.\n\n\nRESULTS\nSeventy percent (n = 46) of the women were objectively assessed at 1 year. Preoperative symptoms included the following: protrusion, 85% (n = 39); difficult defecation, 52% (n = 24); constipation, 46% (n = 21); dyspareunia, 26% (n = 12); and manual evacuation, 24% (n = 11). Posterior vaginal topography was considered abnormal in all patients with a mean Ap point (a point located in the midline of the posterior vaginal wall 3 cm proximal to the hymen) value of -0.5 cm (range, -2 to 3 cm). Postoperative symptom resolution was as follows: protrusion, 90% (35/39; P <.0005); difficult defecation, 54% (14/24; P <.0005); constipation, 43% (9/21; P =.02); dyspareunia, 92% (11/12; P =.01); and manual evacuation, 36% (4/11; P =.125). Vaginal topography at 1 year was improved, with a mean Ap point value of -2 cm (range, -3 to 2 cm).\n\n\nCONCLUSION\nThis technique of rectocele repair improves vaginal topography and alleviates 3 symptoms commonly attributed to rectoceles. It is relatively ineffective for relief of manual evacuation, and constipation is variably decreased.",
"title": ""
},
{
"docid": "dc5c78f8f8e07e8b6b38e13bffeb3197",
"text": "A penetrating head injury belongs to the most severe traumatic brain injuries, in which communication can arise between the intracranial cavity and surrounding environment. The authors present a literature review and typical case reports of a penetrating head injury in children. The list of patients treated at the neurosurgical department in the last 5 years for penetrating TBI is briefly referred. Rapid transfer to the specialized center with subsequent urgent surgical treatment is the important point in the treatment algorithm. It is essential to clean the wound very properly with all the foreign material during the surgery and to close the dura with a water-tight suture. Wide-spectrum antibiotics are of great use. In case of large-extent brain damage, the use of anticonvulsants is recommended. The prognosis of such severe trauma could be influenced very positively by a good medical care organization; obviously, the extent of brain tissue laceration is the limiting factor.",
"title": ""
},
{
"docid": "c6de5f33ca775fb42db4667b0dcc74bf",
"text": "Robotic-assisted laparoscopic prostatectomy is a surgical procedure performed to eradicate prostate cancer. Use of robotic assistance technology allows smaller incisions than the traditional laparoscopic approach and results in better patient outcomes, such as less blood loss, less pain, shorter hospital stays, and better postoperative potency and continence rates. This surgical approach creates unique challenges in patient positioning for the perioperative team because the patient is placed in the lithotomy with steep Trendelenburg position. Incorrect positioning can lead to nerve damage, pressure ulcers, and other complications. Using a special beanbag positioning device made specifically for use with this severe position helps prevent these complications.",
"title": ""
},
{
"docid": "9b702c679d7bbbba2ac29b3a0c2f6d3b",
"text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.",
"title": ""
},
{
"docid": "6f1d7e2faff928c80898bfbf05ac0669",
"text": "This study examined level of engagement with Disney Princess media/products as it relates to gender-stereotypical behavior, body esteem (i.e. body image), and prosocial behavior during early childhood. Participants consisted of 198 children (Mage = 58 months), who were tested at two time points (approximately 1 year apart). Data consisted of parent and teacher reports, and child observations in a toy preference task. Longitudinal results revealed that Disney Princess engagement was associated with more female gender-stereotypical behavior 1 year later, even after controlling for initial levels of gender-stereotypical behavior. Parental mediation strengthened associations between princess engagement and adherence to female gender-stereotypical behavior for both girls and boys, and for body esteem and prosocial behavior for boys only.",
"title": ""
},
{
"docid": "ec49f419b86fc4276ceba06fd0208749",
"text": "In order to organize the large number of products listed in e-commerce sites, each product is usually assigned to one of the multi-level categories in the taxonomy tree. It is a time-consuming and difficult task for merchants to select proper categories within thousan ds of options for the products they sell. In this work, we propose an automatic classification tool to predict the matching category for a given product title and description. We used a combinatio n of two different neural models, i.e., deep belief nets and deep autoencoders, for both titles and descriptions. We implemented a selective reconstruction approach for the input layer during the training of the deep neural networks, in order to scale-out for large-sized sparse feature vectors. GPUs are utilized in order to train neural networks in a reasonable time. We have trained o ur m dels for around 150 million products with a taxonomy tree with at most 5 levels that contains 28,338 leaf categories. Tests with millions of products show that our first prediction s matches 81% of merchants’ assignments, when “others” categories are excluded.",
"title": ""
},
{
"docid": "c92892ac05025e7ce4dddf1669b43df6",
"text": "Joint torque sensing represents one of the foundations and vital components of modern robotic systems that target to match closely the physical interaction performance of biological systems through the realization of torque controlled actuators. However, despite decades of studies on the development of different torque sensors, the design of accurate and reliable torque sensors still remains challenging for the majority of the robotics community preventing the use of the technology. This letter proposes and evaluates two joint torque sensing elements based on strain gauge and deflection-encoder principles. The two designs are elaborated and their performance from different perspectives and practical factors are evaluated including resolution, nonaxial moments load crosstalk, torque ripple rejection, bandwidth, noise/residual offset level, and thermal/time dependent signal drift. The letter reveals the practical details and the pros and cons of each sensor principle providing valuable contributions into the field toward the realization of higher fidelity joint torque sensing performance.",
"title": ""
},
{
"docid": "52504a4825bf773ced200a675d291dde",
"text": "Natural Language Generation (NLG) is defined as the systematic approach for producing human understandable natural language text based on nontextual data or from meaning representations. This is a significant area which empowers human-computer interaction. It has also given rise to a variety of theoretical as well as empirical approaches. This paper intends to provide a detailed overview and a classification of the state-of-the-art approaches in Natural Language Generation. The paper explores NLG architectures and tasks classed under document planning, micro-planning and surface realization modules. Additionally, this paper also identifies the gaps existing in the NLG research which require further work in order to make NLG a widely usable technology.",
"title": ""
},
{
"docid": "e8f09d0b156f890839d18074eac1cc01",
"text": "This paper addresses the problems that must be considered if computers are going to treat their users as individuals with distinct personalities, goals, and so forth. It first outlines the issues, and then proposes stereotypes as a useful mechanism for building models of individual users on the basis of a small amount of information about them. In order to build user models quickly, a large amount of uncertain knowledge must be incorporated into the models. The issue of how to resolve the conflicts that will arise among such inferences is discussed. A system, Grundy, is described that bunds models of its users, with the aid of stereotypes, and then exploits those models to guide it in its task, suggesting novels that people may find interesting. If stereotypes are to be useful to Grundy, they must accurately characterize the users of the system. Some techniques to modify stereotypes on the basis of experience are discussed. An analysis of Grundy's performance shows that its user models are effective in guiding its performance.",
"title": ""
}
] | scidocsrr |
39ea6aeca6f9ce1124ba9e0bfd384686 | Causal video object segmentation from persistence of occlusions | [
{
"docid": "231554e78d509e7bca2dfd4280b411bb",
"text": "Layered models provide a compelling approach for estimating image motion and segmenting moving scenes. Previous methods, however, have failed to capture the structure of complex scenes, provide precise object boundaries, effectively estimate the number of layers in a scene, or robustly determine the depth order of the layers. Furthermore, previous methods have focused on optical flow between pairs of frames rather than longer sequences. We show that image sequences with more frames are needed to resolve ambiguities in depth ordering at occlusion boundaries; temporal layer constancy makes this feasible. Our generative model of image sequences is rich but difficult to optimize with traditional gradient descent methods. We propose a novel discrete approximation of the continuous objective in terms of a sequence of depth-ordered MRFs and extend graph-cut optimization methods with new “moves” that make joint layer segmentation and motion estimation feasible. Our optimizer, which mixes discrete and continuous optimization, automatically determines the number of layers and reasons about their depth ordering. We demonstrate the value of layered models, our optimization strategy, and the use of more than two frames on both the Middlebury optical flow benchmark and the MIT layer segmentation benchmark.",
"title": ""
}
] | [
{
"docid": "7b6cf139cae3e9dae8a2886ddabcfef0",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "07172e8a37f21b8c6629c0a30da63bd3",
"text": "As one of the most influential social media platforms, microblogging is becoming increasingly popular in the last decades. Each day a large amount of events appear and spread in microblogging. The spreading of events and corresponding comments on them can greatly influence the public opinion. It is practical important to discover new emerging events in microblogging and predict their future popularity. Traditional event detection and information diffusion models cannot effectively handle our studied problem, because most existing methods focus only on event detection but ignore to predict their future trend. In this paper, we propose a new approach to detect burst novel events and predict their future popularity simultaneously. Specifically, we first detect events from online microblogging stream by utilizing multiple types of information, i.e., term frequency, and user's social relation. Meanwhile, the popularity of detected event is predicted through a proposed diffusion model which takes both the content and user information of the event into account. Extensive evaluations on two real-world datasets demonstrate the effectiveness of our approach on both event detection and their popularity",
"title": ""
},
{
"docid": "a21513f9cf4d5a0e6445772941e9fba2",
"text": "Superficial dorsal penile vein thrombosis was diagnosed 8 times in 7 patients between 19 and 40 years old (mean age 27 years). All patients related the onset of the thrombosis to vigorous sexual intercourse. No other etiological medications, drugs or constricting devices were implicated. Three patients were treated acutely with anti-inflammatory medications, while 4 were managed expectantly. The mean interval to resolution of symptoms was 7 weeks. Followup ranged from 3 to 30 months (mean 11) at which time all patients noticed normal erectile function. Only 1 patient had recurrent thrombosis 3 months after the initial episode, again related to intercourse. We conclude that this is a benign self-limited condition. Anti-inflammatory agents are useful for acute discomfort but they do not affect the rate of resolution.",
"title": ""
},
{
"docid": "e913d5a0d898df3db28b97b27757b889",
"text": "Speech-language pathologists tend to rely on the noninstrumental swallowing evaluation in making recommendations about a patient’s diet and management plan. The present study was designed to examine the sensitivity and specificity of the accuracy of using the chin-down posture during the clinical/bedside swallowing assessment. In 15 patients with acute stroke and clinically suspected oropharyngeal dysphagia, the correlation between clinical and videofluoroscopic findings was examined. Results identified that there is a difference in outcome prediction using the chin-down posture during the clinical/bedside assessment of swallowing compared to assessment by videofluoroscopy. Results are discussed relative to statistical and clinical perspectives, including site of lesion and factors to be considered in the design of an overall treatment plan for a patient with disordered swallowing.",
"title": ""
},
{
"docid": "ac740402c3e733af4d690e34e567fabe",
"text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.",
"title": ""
},
{
"docid": "c56c71775a0c87f7bb6c59d6607e5280",
"text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.",
"title": ""
},
{
"docid": "00f31f21742a843ce6c4a00f3f6e6259",
"text": "Recent developments in digital technologies bring about considerable business opportunities but also impose significant challenges on firms in all industries. While some industries, e.g., newspapers, have already profoundly reorganized the mechanisms of value creation, delivery, and capture during the course of digitalization (Karimi & Walter, 2015, 2016), many process-oriented and asset intensive industries have not yet fully evaluated and exploited the potential applications (Rigby, 2014). Although the process industries have successfully used advancements in technologies to optimize processes in the past (Kim et al., 2011), digitalization poses an unprecedented shift in technology that exceeds conventional technological evolution (Svahn et al., 2017). Driven by augmented processing power, connectivity of devices (IoT), advanced data analytics, and sensor technology, innovation activities in the process industries now break away from established innovation paths (Svahn et al., 2017; Tripsas, 2009). In contrast to prior innovations that were primarily bound to physical devices, new products are increasingly embedded into systems of value creation that span the physical and digital world (Parmar et al., 2014; Rigby, 2014; Yoo et al., 2010a). On this new playing field, firms and researchers are jointly interested in the organizational characteristics and capabilities that are required to gain a competitive advantage (e.g. Fink, 2011). Whereas prior studies cover the effect of digital transformation on innovation in various industries like newspaper (Karimi and Walter, 2015, 2016), automotive (Henfridsson and Yoo, 2014; Svahn et al., 2017), photography (Tripsas, 2009), and manufacturing (Jonsson et al., 2008), there is a relative dearth of studies that cover the impact of digital transformation in the process industries (Westergren and Holmström, 2012). The process industries are characterized by asset and research intensity, strong integration into physical locations, and often include value chains that are complex and feature aspects of rigidity (Lager Research Paper Digitalization in the process industries – Evidence from the German water industry",
"title": ""
},
{
"docid": "c55e7c3825980d0be4546c7fadc812fe",
"text": "Individual graphene oxide sheets subjected to chemical reduction were electrically characterized as a function of temperature and external electric fields. The fully reduced monolayers exhibited conductivities ranging between 0.05 and 2 S/cm and field effect mobilities of 2-200 cm2/Vs at room temperature. Temperature-dependent electrical measurements and Raman spectroscopic investigations suggest that charge transport occurs via variable range hopping between intact graphene islands with sizes on the order of several nanometers. Furthermore, the comparative study of multilayered sheets revealed that the conductivity of the undermost layer is reduced by a factor of more than 2 as a consequence of the interaction with the Si/SiO2 substrate.",
"title": ""
},
{
"docid": "06f9780257311891f54c5d0c03e73c1a",
"text": "This essay extends Simon's arguments in the Sciences of the Artificial to a critical examination of how theorizing in Information Technology disciplines should occur. The essay is framed around a number of fundamental questions that relate theorizing in the artificial sciences to the traditions of the philosophy of science. Theorizing in the artificial sciences is contrasted with theorizing in other branches of science and the applicability of the scientific method is questioned. The paper argues that theorizing should be considered in a holistic manner that links two modes of theorizing: an interior mode with the how of artifact construction studied and an exterior mode with the what of existing artifacts studied. Unlike some representations in the design science movement the paper argues that the study of artifacts once constructed can not be passed back uncritically to the methods of traditional science. Seven principles for creating knowledge in IT disciplines are derived: (i) artifact system centrality; (ii) artifact purposefulness; (iii) need for design theory; (iv) induction and abduction in theory building; (v) artifact construction as theory building; (vi) interior and exterior modes for theorizing; and (viii) issues with generality. The implicit claim is that consideration of these principles will improve knowledge creation and theorizing in design disciplines, for both design science researchers and also for researchers using more traditional methods. Further, attention to these principles should lead to the creation of more useful and relevant knowledge.",
"title": ""
},
{
"docid": "298cee7d5283cae1debcaf40ce18eb42",
"text": "Fluidic circuits made up of tiny chambers, conduits, and membranes can be fabricated in soft substrates to realize pressure-based sequential logic functions. Additional chambers in the same substrate covered with thin membranes can function as bubble-like tactile features. Sequential addressing of bubbles with fluidic logic enables just two external electronic valves to control of any number of tactile features by \"clocking in\" pressure states one at a time. But every additional actuator added to a shift register requires an additional clock pulse to address, so that the display refresh rate scales inversely with the number of actuators in an array. In this paper, we build a model of a fluidic logic circuit that can be used for sequential addressing of bubble actuators. The model takes the form of a hybrid automaton combining the discrete dynamics of valve switching and the continuous dynamics of compressible fluid flow through fluidic resistors (conduits) and capacitors (chambers). When parameters are set according to the results of system identification experiments on a physical prototype, pressure trajectories and propagation delays predicted by simulation of the hybrid automaton compare favorably to experiment. The propagation delay in turn determines the maximum clock rate and associated refresh rate for a refreshable braille display intended for rendering a full page of braille text or tactile graphics.",
"title": ""
},
{
"docid": "34c3ba06f9bffddec7a08c8109c7f4b9",
"text": "The role of e-learning technologies entirely depends on the acceptance and execution of required-change in the thinking and behaviour of the users of institutions. The research are constantly reporting that many e-learning projects are falling short of their objectives due to many reasons but on the top is the user resistance to change according to the digital requirements of new era. It is argued that the suitable way for change management in e-learning environment is the training and persuading of users with a view to enhance their digital literacy and thus gradually changing the users’ attitude in positive direction. This paper discusses change management in transition to e-learning system considering pedagogical, cost and technical implications. It also discusses challenges and opportunities for integrating these technologies in higher learning institutions with examples from Turkey GATA (Gülhane Askeri Tıp Akademisi-Gülhane Military Medical Academy).",
"title": ""
},
{
"docid": "9a79a9b2c351873143a8209d37b46f64",
"text": "The authors review research on police effectiveness in reducing crime, disorder, and fear in the context of a typology of innovation in police practices. That typology emphasizes two dimensions: one concerning the diversity of approaches, and the other, the level of focus. The authors find that little evidence supports the standard model of policing—low on both of these dimensions. In contrast, research evidence does support continued investment in police innovations that call for greater focus and tailoring of police efforts, combined with an expansion of the tool box of policing beyond simple law enforcement. The strongest evidence of police effectiveness in reducing crime and disorder is found in the case of geographically focused police practices, such as hot-spots policing. Community policing practices are found to reduce fear of crime, but the authors do not find consistent evidence that community policing (when it is implemented without models of problem-oriented policing) affects either crime or disorder. A developing body of evidence points to the effectiveness of problemoriented policing in reducing crime, disorder, and fear. More generally, the authors find that many policing practices applied broadly throughout the United States either have not been the subject of systematic research or have been examined in the context of research designs that do not allow practitioners or policy makers to draw very strong conclusions.",
"title": ""
},
{
"docid": "89552cbc1d432bdbf26b4213b6fc80cc",
"text": "Tuberculosis, also called TB, is currently a major health hazard due to multidrug-resistant forms of bacilli. Global efforts are underway to eradicate TB using new drugs with new modes of action, higher activity, and fewer side effects in combination with vaccines. For this reason, unexplored new sources and previously explored sources were examined and around 353 antimycobacterial compounds (Nat Prod Rep 2007; 24: 278-297) 7 have been previously reported. To develop drugs from these new sources, additional work is required for preclinical and clinical results. Since ancient times, different plant part extracts have been used as traditional medicines against diseases including tuberculosis. This knowledge may be useful in developing future powerful drugs. Plant natural products are again becoming important in this regard. In this review, we report 127 antimycobacterial compounds and their antimycobacterial activities. Of these, 27 compounds had a minimum inhibitory concentration of < 10 µg/mL. In some cases, the mechanism of activity has been determined. We hope that some of these compounds may eventually develop into effective new drugs against tuberculosis.",
"title": ""
},
{
"docid": "049674034f41b359a7db7b3c5ba7c541",
"text": "This paper extends and contributes to emerging debates on the validation of interpretive research (IR) in management accounting. We argue that IR has the potential to produce not only subjectivist, emic understandings of actors’ meanings, but also explanations, characterised by a certain degree of ‘‘thickness”. Mobilising the key tenets of the modern philosophical theory of explanation and the notion of abduction, grounded in pragmatist epistemology, we explicate how explanations may be developed and validated, yet remaining true to the core premises of IR. We focus on the intricate relationship between two arguably central aspects of validation in IR, namely authenticity and plausibility. Working on the assumption that validation is an important, but potentially problematic concern in all serious scholarly research, we explore whether and how validation efforts are manifest in IR using two case studies as illustrative examples. Validation is seen as an issue of convincing readers of the authenticity of research findings whilst simultaneously ensuring that explanations are deemed plausible. Whilst the former is largely a matter of preserving the emic qualities of research accounts, the latter is intimately linked to the process of abductive reasoning, whereby different theories are applied to advance thick explanations. This underscores the view of validation as a process, not easily separated from the ongoing efforts of researchers to develop explanations as research projects unfold and far from reducible to mere technicalities of following pre-specified criteria presumably minimising various biases. These properties detract from a view of validation as conforming to prespecified, stable, and uniform criteria and allow IR to move beyond the ‘‘crisis of validity” arguably prevailing in the social sciences. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "47d6d85b9b902d7078c6daf9402f4b4c",
"text": "Doxorubicin (DOX) is a very effective anticancer agent. However, in its pure form, its application is limited by significant cardiotoxic side effects. The purpose of this study was to develop a controllably activatable chemotherapy prodrug of DOX created by blocking its free amine group with a biotinylated photocleavable blocking group (PCB). An n-hydroxy succunamide protecting group on the PCB allowed selective binding at the DOX active amine group. The PCB included an ortho-nitrophenyl group for photo cleavability and a water-soluble glycol spacer arm ending in a biotin group for enhanced membrane interaction. This novel DOX-PCB prodrug had a 200-fold decrease in cytotoxicity compared to free DOX and could release active DOX upon exposure to UV light at 350 nm. Unlike DOX, DOX-PCB stayed in the cell cytoplasm, did not enter the nucleus, and did not stain the exposed DNA during mitosis. Human liver microsome incubation with DOX-PCB indicated stability against liver metabolic breakdown. The development of the DOX-PCB prodrug demonstrates the possibility of using light as a method of prodrug activation in deep internal tissues without relying on inherent physical or biochemical differences between the tumor and healthy tissue for use as the trigger.",
"title": ""
},
{
"docid": "562cf2d0bc59f0fde4d7377f1d5058a2",
"text": "The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields has become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.",
"title": ""
},
{
"docid": "0c8b192807a6728be21e6a19902393c0",
"text": "The balance between facilitation and competition is likely to change with age due to the dynamic nature of nutrient, water and carbon cycles, and light availability during stand development. These processes have received attention in harsh, arid, semiarid and alpine ecosystems but are rarely examined in more productive communities, in mixed-species forest ecosystems or in long-term experiments spanning more than a decade. The aim of this study was to examine how inter- and intraspecific interactions between Eucalyptus globulus Labill. mixed with Acacia mearnsii de Wildeman trees changed with age and productivity in a field experiment in temperate south-eastern Australia. Spatially explicit neighbourhood indices were calculated to quantify tree interactions and used to develop growth models to examine how the tree interactions changed with time and stand productivity. Interspecific influences were usually less negative than intraspecific influences, and their difference increased with time for E. globulus and decreased with time for A. mearnsii. As a result, the growth advantages of being in a mixture increased with time for E. globulus and decreased with time for A. mearnsii. The growth advantage of being in a mixture also decreased for E. globulus with increasing stand productivity, showing that spatial as well as temporal dynamics in resource availability influenced the magnitude and direction of plant interactions.",
"title": ""
},
{
"docid": "a8b7d6b3a43d39c8200e7787c3d58a0e",
"text": "Being Scrum the agile software development framework most commonly used in the software industry, its applicability is attracting great attention to the academia. That is why this topic is quite often included in computer science and related university programs. In this article, we present a course design of a Software Engineering course where an educational framework and an open-source agile project management tool were used to develop real-life projects by undergraduate students. During the course, continuous guidance was given by the teaching staff to facilitate the students' learning of Scrum. Results indicate that students find it easy to use the open-source tool and helpful to apply Scrum to a real-life project. However, the unavailability of the client and conflicts among the team members have negative impact on the realization of projects. The guidance given to students along the course helped identify five common issues faced by students through the learning process.",
"title": ""
},
{
"docid": "f02bd91e8374506aa4f8a2107f9545e6",
"text": "In an online survey with two cohorts (2009 and 2011) of undergraduates in dating relationshi ps, we examined how attachment was related to communication technology use within romantic relation ships. Participants reported on their attachment style and frequency of in-person communication as well as phone, text messaging, social network site (SNS), and electronic mail usage with partners. Texting and SNS communication were more frequent in 2011 than 2009. Attachment avoidance was related to less frequent phone use and texting, and greater email usage. Electronic communication channels (phone and texting) were related to positive relationship qualities, however, once accounting for attachment, only moderated effects were found. Interactions indicated texting was linked to more positive relationships for highly avoidant (but not less avoidant) participants. Additionally, email use was linked to more conflict for highly avoidant (but not less avoidant) participants. Finally, greater use of a SNS was positively associated with intimacy/support for those higher (but not lower) on attachment anxiety. This study illustrates how attachment can help to explain why the use of specific technology-based communication channels within romantic relationships may mean different things to different people, and that certain channels may be especially relevant in meeting insecurely attached individuals’ needs. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "41ceb618f20b82eaa65588045b609dcb",
"text": "In decision making under uncertainty there are two main questions that need to be evaluated: i) What are the future consequences and associated uncertainties of an action, and ii) what is a good (or right) decision or action. Philosophically these issues are categorised as epistemic questions (i.e. questions of knowledge) and ethical questions (i.e. questions of moral and norms). This paper discusses the second issue, and evaluates different bases for a good decision, using different ethical theories as a starting point. This includes the utilitarian ethics of Bentley and Mills, and deontological ethics of Kant, Rawls and Habermas. The paper addresses various principles in risk management and risk related decision making, including cost benefit analysis, minimum safety criterion, the ALARP principle and the precautionary principle.",
"title": ""
}
] | scidocsrr |
c5dac05d59a8bb220f675b6f8fb1a481 | Classification of histopathological images using convolutional neural network | [
{
"docid": "7fefe01183ad6c9c897b83f9b9bbe5be",
"text": "The Pap smear test is a manual screening procedure that is used to detect precancerous changes in cervical cells based on color and shape properties of their nuclei and cytoplasms. Automating this procedure is still an open problem due to the complexities of cell structures. In this paper, we propose an unsupervised approach for the segmentation and classification of cervical cells. The segmentation process involves automatic thresholding to separate the cell regions from the background, a multi-scale hierarchical segmentation algorithm to partition these regions based on homogeneity and circularity, and a binary classifier to finalize the separation of nuclei from cytoplasm within the cell regions. Classification is posed as a grouping problem by ranking the cells based on their feature characteristics modeling abnormality degrees. The proposed procedure constructs a tree using hierarchical clustering, and then arranges the cells in a linear order by using an optimal leaf ordering algorithm that maximizes the similarity of adjacent leaves without any requirement for training examples or parameter adjustment. Performance evaluation using two data sets show the effectiveness of the proposed approach in images having inconsistent staining, poor contrast, and overlapping cells.",
"title": ""
}
] | [
{
"docid": "aa7114bf0038f2ab4df6908ed7d28813",
"text": "Sematch is an integrated framework for the development, evaluation and application of semantic similarity for Knowledge Graphs. The framework provides a number of similarity tools and datasets, and allows users to compute semantic similarity scores of concepts, words, and entities, as well as to interact with Knowledge Graphs through SPARQL queries. Sematch focuses on knowledge-based semantic similarity that relies on structural knowledge in a given taxonomy (e.g. depth, path length, least common subsumer), and statistical information contents. Researchers can use Sematch to develop and evaluate semantic similarity metrics and exploit these metrics in applications. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "39912e18a03d78e9e4129856f1fbf2e5",
"text": "Ridge regression is an algorithm that takes as input a large number of data points and finds the best-fit linear curve through these points. The algorithm is a building block for many machine-learning operations. We present a system for privacy-preserving ridge regression. The system outputs the best-fit curve in the clear, but exposes no other information about the input data. Our approach combines both homomorphic encryption and Yao garbled circuits, where each is used in a different part of the algorithm to obtain the best performance. We implement the complete system and experiment with it on real data-sets, and show that it significantly outperforms pure implementations based only on homomorphic encryption or Yao circuits.",
"title": ""
},
{
"docid": "9eafc698b64e6042d8e7a23c9b2cce0c",
"text": "Though convolutional neural networks have achieved stateof-the-art performance on various vision tasks, they are extremely vulnerable to adversarial examples, which are obtained by adding humanimperceptible perturbations to the original images. Adversarial examples can thus be used as an useful tool to evaluate and select the most robust models in safety-critical applications. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. To further improve the transferability, we (1) integrate the recently proposed momentum method into the attack process; and (2) attack an ensemble of networks simultaneously. By evaluating our method against top defense submissions and official baselines from NIPS 2017 adversarial competition, this enhanced attack reaches an average success rate of 73.0%, which outperforms the top 1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a benchmark for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in future. The code is public available at https://github.com/cihangxie/DI-2-FGSM.",
"title": ""
},
{
"docid": "afa8b1315f051fa6f683f63d58fcc3d4",
"text": "Our opinions and judgments are increasingly shaped by what we read on social media -- whether they be tweets and posts in social networks, blog posts, or review boards. These opinions could be about topics such as consumer products, politics, life style, or celebrities. Understanding how users in a network update opinions based on their neighbor's opinions, as well as what global opinion structure is implied when users iteratively update opinions, is important in the context of viral marketing and information dissemination, as well as targeting messages to users in the network.\n In this paper, we consider the problem of modeling how users update opinions based on their neighbors' opinions. We perform a set of online user studies based on the celebrated conformity experiments of Asch [1]. Our experiments are carefully crafted to derive quantitative insights into developing a model for opinion updates (as opposed to deriving psychological insights). We show that existing and widely studied theoretical models do not explain the entire gamut of experimental observations we make. This leads us to posit a new, nuanced model that we term the BVM. We present preliminary theoretical and simulation results on the convergence and structure of opinions in the entire network when users iteratively update their respective opinions according to the BVM. We show that consensus and polarization of opinions arise naturally in this model under easy to interpret initial conditions on the network.",
"title": ""
},
{
"docid": "0b18f7966a57e266487023d3a2f3549d",
"text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful",
"title": ""
},
{
"docid": "9e9dd203746a1bd4024632abeb80fb0a",
"text": "Translating data from linked data sources to the vocabulary that is expected by a linked data application requires a large number of mappings and can require a lot of structural transformations as well as complex property value transformations. The R2R mapping language is a language based on SPARQL for publishing expressive mappings on the web. However, the specification of R2R mappings is not an easy task. This paper therefore proposes the use of mapping patterns to semi-automatically generate R2R mappings between RDF vocabularies. In this paper, we first specify a mapping language with a high level of abstraction to transform data from a source ontology to a target ontology vocabulary. Second, we introduce the proposed mapping patterns. Finally, we present a method to semi-automatically generate R2R mappings using the mapping",
"title": ""
},
{
"docid": "fe536ac94342c96f6710afb4a476278b",
"text": "The human arm has 7 degrees of freedom (DOF) while only 6 DOF are required to position the wrist and orient the palm. Thus, the inverse kinematics of an human arm has a nonunique solution. Resolving this redundancy becomes critical as the human interacts with a wearable robot and the inverse kinematics solution of these two coupled systems must be identical to guarantee an seamless integration. The redundancy of the arm can be formulated by defining the swivel angle, the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Analyzing reaching tasks recorded with a motion capture system indicates that the swivel angle is selected such that when the elbow joint is flexed, the palm points to the head. Based on these experimental results, a new criterion is formed to resolve the human arm redundancy. This criterion was implemented into the control algorithm of an upper limb 7-DOF wearable robot. Experimental results indicate that by using the proposed redundancy resolution criterion, the error between the predicted and the actual swivel angle adopted by the motor control system is less then 5°.",
"title": ""
},
{
"docid": "d29634888a4f1cee1ed613b0f038ddb3",
"text": "This work investigates the use of linguistically motivated features to improve search, in particular for ranking answers to non-factoid questions. We show that it is possible to exploit existing large collections of question–answer pairs (from online social Question Answering sites) to extract such features and train ranking models which combine them effectively. We investigate a wide range of feature types, some exploiting natural language processing such as coarse word sense disambiguation, named-entity identification, syntactic parsing, and semantic role labeling. Our experiments demonstrate that linguistic features, in combination, yield considerable improvements in accuracy. Depending on the system settings we measure relative improvements of 14% to 21% in Mean Reciprocal Rank and Precision@1, providing one of the most compelling evidence to date that complex linguistic features such as word senses and semantic roles can have a significant impact on large-scale information retrieval tasks.",
"title": ""
},
{
"docid": "dcd705e131eb2b60c54ff5cb6ae51555",
"text": "Comprehension is one fundamental process in the software life cycle. Although necessary, this comprehension is difficult to obtain due to amount and complexity of information related to software. Thus, software visualization techniques and tools have been proposed to facilitate the comprehension process and to reduce maintenance costs. This paper shows the results from a Literature Systematic Review to identify software visualization techniques and tools. We analyzed 52 papers and we identified 28 techniques and 33 tools for software visualization. Among these techniques, 71% have been implemented and available to users, 48% use 3D visualization and 80% are generated using static analysis.",
"title": ""
},
{
"docid": "6d4f74f9d6b79f7f94fc4e12df28998e",
"text": "We introduce MySong, a system that automatically chooses chords to accompany a vocal melody. A user with no musical experience can create a song with instrumental accompaniment just by singing into a microphone, and can experiment with different styles and chord patterns using interactions designed to be intuitive to non-musicians.\n We describe the implementation of MySong, which trains a Hidden Markov Model using a music database and uses that model to select chords for new melodies. Model parameters are intuitively exposed to the user. We present results from a study demonstrating that chords assigned to melodies using MySong and chords assigned manually by musicians receive similar subjective ratings. We then present results from a second study showing that thirteen users with no background in music theory are able to rapidly create musical accompaniments using MySong, and that these accompaniments are rated positively by evaluators.",
"title": ""
},
{
"docid": "18b173283a1eb58170982504bec7484f",
"text": "Database forensics is a domain that uses database content and metadata to reveal malicious activities on database systems in an Internet of Things environment. Although the concept of database forensics has been around for a while, the investigation of cybercrime activities and cyber breaches in an Internet of Things environment would benefit from the development of a common investigative standard that unifies the knowledge in the domain. Therefore, this paper proposes common database forensic investigation processes using a design science research approach. The proposed process comprises four phases, namely: 1) identification; 2) artefact collection; 3) artefact analysis; and 4) the documentation and presentation process. It allows the reconciliation of the concepts and terminologies of all common database forensic investigation processes; hence, it facilitates the sharing of knowledge on database forensic investigation among domain newcomers, users, and practitioners.",
"title": ""
},
{
"docid": "34d2c2349291bed154ef29f2f5472cb5",
"text": "We present a novel algorithm for automatically co-segmenting a set of shapes from a common family into consistent parts. Starting from over-segmentations of shapes, our approach generates the segmentations by grouping the primitive patches of the shapes directly and obtains their correspondences simultaneously. The core of the algorithm is to compute an affinity matrix where each entry encodes the similarity between two patches, which is measured based on the geometric features of patches. Instead of concatenating the different features into one feature descriptor, we formulate co-segmentation into a subspace clustering problem in multiple feature spaces. Specifically, to fuse multiple features, we propose a new formulation of optimization with a consistent penalty, which facilitates both the identification of most similar patches and selection of master features for two similar patches. Therefore the affinity matrices for various features are sparsity-consistent and the similarity between a pair of patches may be determined by part of (instead of all) features. Experimental results have shown how our algorithm jointly extracts consistent parts across the collection in a good manner.",
"title": ""
},
{
"docid": "1b5a28c875cf49eadac7032d3dd6398f",
"text": "This paper proposes a new approach to style, arising from our work on computational media using structural blending, which enriches the conceptual blending of cognitive linguistics with structure building operations in order to encompass syntax and narrative as well as metaphor. We have implemented both conceptual and structural blending, and conducted initial experiments with poetry, although the approach generalizes to other media. The central idea is to analyze style in terms of principles for blending, based on our £nding that very different principles from those of common sense blending are needed for some creative works.",
"title": ""
},
{
"docid": "0d3e55a7029d084f6ba889b7d354411c",
"text": "Electrophysiological and computational studies suggest that nigro-striatal dopamine may play an important role in learning about sequences of environmentally important stimuli, particularly when this learning is based upon step-by-step associations between stimuli, such as in second-order conditioning. If so, one would predict that disruption of the midbrain dopamine system--such as occurs in Parkinson's disease--may lead to deficits on tasks that rely upon such learning processes. This hypothesis was tested using a \"chaining\" task, in which each additional link in a sequence of stimuli leading to reward is trained step-by-step, until a full sequence is learned. We further examined how medication (L-dopa) affects this type of learning. As predicted, we found that Parkinson's patients tested 'off' L-dopa performed as well as controls during the first phase of this task, when required to learn a simple stimulus-response association, but were impaired at learning the full sequence of stimuli. In contrast, we found that Parkinson's patients tested 'on' L-dopa performed better than those tested 'off', and no worse than controls, on all phases of the task. These findings suggest that the loss of dopamine that occurs in Parkinson's disease can lead to specific learning impairments that are predicted by electrophysiological and computational studies, and that enhancing dopamine levels with L-dopa alleviates this deficit. This last result raises questions regarding the mechanisms by which midbrain dopamine modulates learning in Parkinson's disease, and how L-dopa affects these processes.",
"title": ""
},
{
"docid": "21c1be0458cc6908c3f7feb6591af841",
"text": "Initial work on automatic emotion recognition concentrates mainly on audio-based emotion classification. Speech is the most important channel for the communication between humans and it may expected that emotional states are trans-fered though content, prosody or paralinguistic cues. Besides the audio modality, with the rapidly developing computer hardware and video-processing devices researches start exploring the video modality. Visual-based emotion recognition works focus mainly on the extraction and recognition of emotional information from the facial expressions. There are also attempts to classify emotional states from body or head gestures and to combine different visual modalities, for instance facial expressions and body gesture captured by two separate cameras [3]. Emotion recognition from psycho-physiological measurements, such as skin conductance, respiration, electro-cardiogram (ECG), electromyography (EMG), electroencephalography (EEG) is another attempt. In contrast to speech, gestures or facial expressions these biopotentials are the result of the autonomic nervous system and cannot be imitated [4]. Research activities in facial expression and speech based emotion recognition [6] are usually performed independently from each other. But in almost all practical applications people speak and exhibit facial expressions at the same time, and consequently both modalities should be used in order to perform robust affect recognition. Therefore, multimodal, and in particularly audiovisual emotion recognition has been emerging in recent times [11], for example multiple classifier systems have been widely investigated for the classification of human emotions [1, 9, 12, 14]. Combining classifiers is a promising approach to improve the overall classifier performance [13, 8]. In multiple classifier systems (MCS) it is assumed that the raw data X originates from an underlying source, but each classifier receives different subsets of (X) of the same raw input data X. Feature vector F j (X) are used as the input to the j−th classifier computing an estimate y j of the class membership of F j (X). This output y j might be a crisp class label or a vector of class memberships, e.g. estimates of posteriori probabilities. Based on the multiple classifier outputs y 1 ,. .. , y N the combiner produces the final decision y. Combiners used in this study are fixed transformations of the multiple classifier outputs y 1 ,. .. , y N. Examples of such combining rules are Voting, (weighted) Averaging, and Multiplying, just to mention the most popular types. 2 Friedhelm Schwenker In addition to a priori fixed combination rules the combiner can be a …",
"title": ""
},
{
"docid": "8920d6f0faa1f46ca97306f4d59897d9",
"text": "Tactile augmentation is a simple, safe, inexpensive interaction technique for adding physical texture and force feedback cues to virtual objects. This study explored whether virtual reality (VR) exposure therapy reduces fear of spiders and whether giving patients the illusion of physically touching the virtual spider increases treatment effectiveness. Eight clinically phobic students were randomly assigned to one of 3 groups—(a) no treatment, (b) VR with no tactile cues, or (c) VR with a physically “touchable” virtual spider—as were 28 nonclinically phobic students. Participants in the 2 VR treatment groups received three 1-hr exposure therapy sessions resulting in clinically significant drops in behavioral avoidance and subjective fear ratings. The tactile augmentation group showed the greatest progress on behavioral measures. On average, participants in this group, who only approached to 5.5 ft of a live spider on the pretreatment Behavioral Avoidance Test (Garcia-Palacios, 2002), were able to approach to 6 in. of the spider after VR exposure treatment and did so with much less anxiety (see www.vrpain.com for details). Practical implications are discussed. INTERNATIONAL JOURNAL OF HUMAN–COMPUTER INTERACTION, 16(2), 283–300 Copyright © 2003, Lawrence Erlbaum Associates, Inc.",
"title": ""
},
{
"docid": "be5e98bb924a81baa561a3b3870c4a76",
"text": "Objective: Mastitis is one of the most costly diseases in dairy cows, which greatly decreases milk production. Use of antibiotics in cattle leads to antibiotic-resistance of mastitis-causing bacteria. The present study aimed to investigate synergistic effect of silver nanoparticles (AgNPs) with neomycin or gentamicin antibiotic on mastitis-causing Staphylococcus aureus. Materials and Methods: In this study, 46 samples of milk were taken from the cows with clinical and subclinical mastitis during the august-October 2015 sampling period. In addition to biochemical tests, nuc gene amplification by PCR was used to identify strains of Staphylococcus aureus. Disk diffusion test and microdilution were performed to determine minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). Fractional Inhibitory Concentration (FIC) index was calculated to determine the interaction between a combination of AgNPs and each one of the antibiotics. Results: Twenty strains of Staphylococcus aureus were isolated from 46 milk samples and were confirmed by PCR. Based on disk diffusion test, 35%, 10% and 55% of the strains were respectively susceptible, moderately susceptible and resistant to gentamicin. In addition, 35%, 15% and 50% of the strains were respectively susceptible, moderately susceptible and resistant to neomycin. According to FIC index, gentamicin antibiotic and AgNPs had synergistic effects in 50% of the strains. Furthermore, neomycin antibiotic and AgNPs had synergistic effects in 45% of the strains. Conclusion: It could be concluded that a combination of AgNPs with either gentamicin or neomycin showed synergistic antibacterial properties in Staphylococcus aureus isolates from mastitis. In addition, some hypotheses were proposed to explain antimicrobial mechanism of the combination.",
"title": ""
},
{
"docid": "f3ee129af2a833f8775c5366c188d71c",
"text": "Strong motivation for developing new prosthetic hand devices is provided by the fact that low functionality and controllability—in addition to poor cosmetic appearance—are the most important reasons why amputees do not regularly use their prosthetic hands. This paper presents the design of the CyberHand, a cybernetic anthropomorphic hand intended to provide amputees with functional hand replacement. Its design was bio-inspired in terms of its modular architecture, its physical appearance, kinematics, sensorization, and actuation, and its multilevel control system. Its underactuated mechanisms allow separate control of each digit as well as thumb–finger opposition and, accordingly, can generate a multitude of grasps. Its sensory system was designed to provide proprioceptive information as well as to emulate fundamental functional properties of human tactile mechanoreceptors of specific importance for grasp-and-hold tasks. The CyberHand control system presumes just a few efferent and afferent channels and was divided in two main layers: a high-level control that interprets the user’s intention (grasp selection and required force level) and can provide pertinent sensory feedback and a low-level control responsible for actuating specific grasps and applying the desired total force by taking advantage of the intelligent mechanics. The grasps made available by the high-level controller include those fundamental for activities of daily living: cylindrical, spherical, tridigital (tripod), and lateral grasps. The modular and flexible design of the CyberHand makes it suitable for incremental development of sensorization, interfacing, and control strategies and, as such, it will be a useful tool not only for clinical research but also for addressing neuroscientific hypotheses regarding sensorimotor control.",
"title": ""
},
{
"docid": "209248c4cbcaebbe0e8c2465e46f4183",
"text": "With many advantageous features such as softness and better biocompatibility, flexible electronic device is a promising technology that can enable many emerging applications. However, most of the existing applications with flexible devices are sensors and drivers, while there is nearly no utilization aiming at complex computation, because the flexible devices have lower electron mobility, simple structure, and large process variation. In this paper, we propose an innovative method that enabled flexible devices to implement real-time and energy-efficient Difference-of-Gaussian, which illustrate feasibility and potentials for the flexible devices to achieve complicated real-time computation in future generation products.",
"title": ""
}
] | scidocsrr |
b22ae6719a5f4426add3827a12eeef7b | Shallow and Deep Convolutional Networks for Saliency Prediction | [
{
"docid": "a77eddf9436652d68093946fbe1d2ed0",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
},
{
"docid": "925d0a4b4b061816c540f2408ea593d1",
"text": "It is believed that eye movements in free-viewing of natural scenes are directed by both bottom-up visual saliency and top-down visual factors. In this paper, we propose a novel computational framework to simultaneously learn these two types of visual features from raw image data using a multiresolution convolutional neural network (Mr-CNN) for predicting eye fixations. The Mr-CNN is directly trained from image regions centered on fixation and non-fixation locations over multiple resolutions, using raw image pixels as inputs and eye fixation attributes as labels. Diverse top-down visual features can be learned in higher layers. Meanwhile bottom-up visual saliency can also be inferred via combining information over multiple resolutions. Finally, optimal integration of bottom-up and top-down cues can be learned in the last logistic regression layer to predict eye fixations. The proposed approach achieves state-of-the-art results over four publically available benchmark datasets, demonstrating the superiority of our work.",
"title": ""
}
] | [
{
"docid": "eca2bfe1b96489e155e19d02f65559d6",
"text": "• Oracle experiment: to understand how well these attributes, when used together, can explain persuasiveness, we train 3 linear SVM regressors, one for each component type, to score an arguments persuasiveness using gold attribute’s as features • Two human annotators who were both native speakers of English were first familiarized with the rubrics and definitions and then trained on five essays • 30 essays were doubly annotated for computing inter-annotator agreement • Each of the remaining essays was annotated by one of the annotators • Score/Class distributions by component type: Give me More Feedback: Annotating Argument Persusiveness and Related Attributes in Student Essays",
"title": ""
},
{
"docid": "4d3baff85c302b35038f35297a8cdf90",
"text": "Most speech recognition applications in use today rely heavily on confidence measure for making optimal decisions. In this paper, we aim to answer the question: what can be done to improve the quality of confidence measure if we cannot modify the speech recognition engine? The answer provided in this paper is a post-processing step called confidence calibration, which can be viewed as a special adaptation technique applied to confidence measure. Three confidence calibration methods have been developed in this work: the maximum entropy model with distribution constraints, the artificial neural network, and the deep belief network. We compare these approaches and demonstrate the importance of key features exploited: the generic confidence-score, the application-dependent word distribution, and the rule coverage ratio. We demonstrate the effectiveness of confidence calibration on a variety of tasks with significant normalized cross entropy increase and equal error rate reduction.",
"title": ""
},
{
"docid": "5ea42460dc2bdd2ebc2037e35e01dca9",
"text": "Mobile edge clouds (MECs) are small cloud-like infrastructures deployed in close proximity to users, allowing users to have seamless and low-latency access to cloud services. When users move across different locations, their service applications often need to be migrated to follow the user so that the benefit of MEC is maintained. In this paper, we propose a layered framework for migrating running applications that are encapsulated either in virtual machines (VMs) or containers. We evaluate the migration performance of various real applications under the proposed framework.",
"title": ""
},
{
"docid": "ad8762ae878b7e731b11ab6d67f9867d",
"text": "We describe a posterolateral transfibular neck approach to the proximal tibia. This approach was developed as an alternative to the anterolateral approach to the tibial plateau for the treatment of two fracture subtypes: depressed and split depressed fractures in which the comminution and depression are located in the posterior half of the lateral tibial condyle. These fractures have proved particularly difficult to reduce and adequately internally fix through an anterior or anterolateral approach. The approach described in this article exposes the posterolateral aspect of the tibial plateau between the posterior margin of the iliotibial band and the posterior cruciate ligament. The approach allows lateral buttressing of the lateral tibial plateau and may be combined with a simultaneous posteromedial and/or anteromedial approach to the tibial plateau. Critically, the proximal tibial soft tissue envelope and its blood supply are preserved. To date, we have used this approach either alone or in combination with a posteromedial approach for the successful reduction of tibial plateau fractures in eight patients. No complications related to this approach were documented, including no symptoms related to the common peroneal nerve, and all fractures and fibular neck osteotomies healed uneventfully.",
"title": ""
},
{
"docid": "c940cfa3a74cce2aed59640975b4b80d",
"text": "A novel ultra-wideband bandpass filter (BPF) is presented using a back-to-back microstrip-to-coplanar waveguide (CPW) transition employed as the broadband balun structure in this letter. The proposed BPF is based on the electromagnetic coupling between open-circuited microstrip line and short-circuited CPW. The equivalent circuit of half of the filter is used to calculate the input impedance. The broadband microstip-to-CPW transition is designed at the center frequency of 6.85 GHz. The simulated and measured results are shown in this letter.",
"title": ""
},
{
"docid": "cd3fbe507e685b3f62ebd5e5243ddb0b",
"text": "Changes in the background EEG activity occurring at the same time as visual and auditory evoked potentials, as well as during the interstimulus interval in a CNV paradigm were analysed in human subjects, using serial power measurements of overlapping EEG segments. The analysis was focused on the power of the rhythmic activity within the alpha band (RAAB power). A decrease in RAAB power occurring during these event-related phenomena was indicative of desynchronization. Phasic, i.e. short lasting, localised desynchronization was present during sensory stimulation, and also preceding the imperative signal and motor response (motor preactivation) in the CNV paradigm.",
"title": ""
},
{
"docid": "614cc9968370bffb32cf70f44c8f8688",
"text": "The abundance of event data in today’s information systems makes it possible to “confront” process models with the actual observed behavior. Process mining techniques use event logs to discover process models that describe the observed behavior, and to check conformance of process models by diagnosing deviations between models and reality. In many situations, it is desirable to mediate between a preexisting model and observed behavior. Hence, we would like to repair the model while improving the correspondence between model and log as much as possible. The approach presented in this article assigns predefined costs to repair actions (allowing inserting or skipping of activities). Given a maximum degree of change, we search for models that are optimal in terms of fitness—that is, the fraction of behavior in the log not possible according to the model is minimized. To compute fitness, we need to align the model and log, which can be time consuming. Hence, finding an optimal repair may be intractable. We propose different alternative approaches to speed up repair. The number of alignment computations can be reduced dramatically while still returning near-optimal repairs. The different approaches have been implemented using the process mining framework ProM and evaluated using real-life logs.",
"title": ""
},
{
"docid": "0d0f6e946bd9125f87a78d8cf137ba97",
"text": "Acute renal failure increases risk of death after cardiac surgery. However, it is not known whether more subtle changes in renal function might have an impact on outcome. Thus, the association between small serum creatinine changes after surgery and mortality, independent of other established perioperative risk indicators, was analyzed. In a prospective cohort study in 4118 patients who underwent cardiac and thoracic aortic surgery, the effect of changes in serum creatinine within 48 h postoperatively on 30-d mortality was analyzed. Cox regression was used to correct for various established demographic preoperative risk indicators, intraoperative parameters, and postoperative complications. In the 2441 patients in whom serum creatinine decreased, early mortality was 2.6% in contrast to 8.9% in patients with increased postoperative serum creatinine values. Patients with large decreases (DeltaCrea <-0.3 mg/dl) showed a progressively increasing 30-d mortality (16 of 199 [8%]). Mortality was lowest (47 of 2195 [2.1%]) in patients in whom serum creatinine decreased to a maximum of -0.3 mg/dl; mortality increased to 6% in patients in whom serum creatinine remained unchanged or increased up to 0.5 mg/dl. Mortality (65 of 200 [32.5%]) was highest in patients in whom creatinine increased > or =0.5 mg/dl. For all groups, increases in mortality remained significant in multivariate analyses, including postoperative renal replacement therapy. After cardiac and thoracic aortic surgery, 30-d mortality was lowest in patients with a slight postoperative decrease in serum creatinine. Any even minimal increase or profound decrease of serum creatinine was associated with a substantial decrease in survival.",
"title": ""
},
{
"docid": "717d1c31ac6766fcebb4ee04ca8aa40f",
"text": "We present an incremental maintenance algorithm for leapfrog triejoin. The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.",
"title": ""
},
{
"docid": "cc57f21666ece3c6ba7c9a28228a44c1",
"text": "The past few years have seen rapid advances in communication and information technology (C&IT), and the pervasion of the worldwide web into everyday life has important implications for education. Most medical schools provide extensive computer networks for their students, and these are increasingly becoming a central component of the learning and teaching environment. Such advances bring new opportunities and challenges to medical education, and are having an impact on the way that we teach and on the way that students learn, and on the very design and delivery of the curriculum. The plethora of information available on the web is overwhelming, and both students and staff need to be taught how to manage it effectively. Medical schools must develop clear strategies to address the issues raised by these technologies. We describe how medical schools are rising to this challenge, look at some of the ways in which communication and information technology can be used to enhance the learning and teaching environment, and discuss the potential impact of future developments on medical education.",
"title": ""
},
{
"docid": "15dc2cd497f782d16311cd0e658e2e90",
"text": "We present a novel view of the structuring of distributed systems, and a few examples of its utilization in an object-oriented context. In a distributed system, the structure of a service or subsystem may be complex, being implemented as a set of communicating server objects; however, this complexity of structure should not be apparent to the client. In our proposal, a client must first acquire a local object, called a proxy, in order to use such a service. The proxy represents the whole set of servers. The client directs all its communication to the proxy. The proxy, and all the objects it represents, collectively form one distributed object, which is not decomposable by the client. Any higher-level communication protocols are internal to this distributed object. Such a view provides a powerful structuring framework for distributed systems; it can be implemented cheaply without sacrificing much flexibility. It subsumes may previous proposals, but encourages better information-hiding and encapsulation.",
"title": ""
},
{
"docid": "2bd53f469a81d2c1ef17c239761a5758",
"text": "This paper addresses the stability problem of a class of delayed neural networks with time-varying impulses. One important feature of the time-varying impulses is that both the stabilizing and destabilizing impulses are considered simultaneously. Based on the comparison principle, the stability of delayed neural networks with time-varying impulses is investigated. Finally, the simulation results demonstrate the effectiveness of the results.",
"title": ""
},
{
"docid": "01eadabcfbe9274c47d9ebcd45ea2332",
"text": "The classical uncertainty principle provides a fundamental tradeoff in the localization of a signal in the time and frequency domains. In this paper we describe a similar tradeoff for signals defined on graphs. We describe the notions of “spread” in the graph and spectral domains, using the eigenvectors of the graph Laplacian as a surrogate Fourier basis. We then describe how to find signals that, among all signals with the same spectral spread, have the smallest graph spread about a given vertex. For every possible spectral spread, the desired signal is the solution to an eigenvalue problem. Since localization in graph and spectral domains is a desirable property of the elements of wavelet frames on graphs, we compare the performance of some existing wavelet transforms to the obtained bound.",
"title": ""
},
{
"docid": "ebc57f065fa7f3206564ff14539b0707",
"text": "Following the Daubert ruling in 1993, forensic evidence based on fingerprints was first challenged in the 1999 case of the U.S. versus Byron C. Mitchell and, subsequently, in 20 other cases involving fingerprint evidence. The main concern with the admissibility of fingerprint evidence is the problem of individualization, namely, that the fundamental premise for asserting the uniqueness of fingerprints has not been objectively tested and matching error rates are unknown. In order to assess the error rates, we require quantifying the variability of fingerprint features, namely, minutiae in the target population. A family of finite mixture models has been developed in this paper to represent the distribution of minutiae in fingerprint images, including minutiae clustering tendencies and dependencies in different regions of the fingerprint image domain. A mathematical model that computes the probability of a random correspondence (PRC) is derived based on the mixture models. A PRC of 2.25 times10-6 corresponding to 12 minutiae matches was computed for the NIST4 Special Database, when the numbers of query and template minutiae both equal 46. This is also the estimate of the PRC for a target population with a similar composition as that of NIST4.",
"title": ""
},
{
"docid": "147719cdac405333d8f8c2b7558be472",
"text": "OBJECTIVES\nBiliary injuries are frequently accompanied by vascular injuries, which may worsen the bile duct injury and cause liver ischemia. We performed an analytical review with the aim of defining vasculobiliary injury and setting out the important issues in this area.\n\n\nMETHODS\nA literature search of relevant terms was performed using OvidSP. Bibliographies of papers were also searched to obtain older literature.\n\n\nRESULTS\n Vasculobiliary injury was defined as: an injury to both a bile duct and a hepatic artery and/or portal vein; the bile duct injury may be caused by operative trauma, be ischaemic in origin or both, and may or may not be accompanied by various degrees of hepatic ischaemia. Right hepatic artery (RHA) vasculobiliary injury (VBI) is the most common variant. Injury to the RHA likely extends the biliary injury to a higher level than the gross observed mechanical injury. VBI results in slow hepatic infarction in about 10% of patients. Repair of the artery is rarely possible and the overall benefit unclear. Injuries involving the portal vein or common or proper hepatic arteries are much less common, but have more serious effects including rapid infarction of the liver.\n\n\nCONCLUSIONS\nRoutine arteriography is recommended in patients with a biliary injury if early repair is contemplated. Consideration should be given to delaying repair of a biliary injury in patients with occlusion of the RHA. Patients with injuries to the portal vein or proper or common hepatic should be emergently referred to tertiary care centers.",
"title": ""
},
{
"docid": "c7fd5a26da59fab4e66e0cb3e93530d6",
"text": "Switching audio amplifiers are widely used in HBridge topology thanks to their high efficiency; however low audio performances in single ended power stage topology is a strong weakness leading to not be used for headset applications. This paper explains the importance of efficient error correction in Single Ended Class-D audio amplifier. A hysteresis control for Class-D amplifier with a variable window is also presented. The analyses are verified by simulations and measurements. The proposed solution was fabricated in 0.13µm CMOS technology with an active area of 0.2mm2. It could be used in single ended output configuration fully compatible with common headset connectors. The proposed Class-D amplifier achieves a harmonic distortion of 0.01% and a power supply rejection of 70dB with a quite low static current consumption.",
"title": ""
},
{
"docid": "526707cbd0083267c4d84808aa206d8a",
"text": "The research of probiotics for aquatic animals is increasing with the demand for environmentfriendly aquaculture. The probiotics were defined as live microbial feed supplements that improve health of man and terrestrial livestock. The gastrointestinal microbiota of fish and shellfish are peculiarly dependent on the external environment, due to the water flow passing through the digestive tract. Most bacterial cells are transient in the gut, with continuous intrusion of microbes coming from water and food. Some commercial products are referred to as probiotics, though they were designed to treat the rearing medium, not to supplement the diet. This extension of the probiotic concept is pertinent when the administered microbes survive in the gastrointestinal tract. Otherwise, more general terms are suggested, like biocontrol when the treatment is antagonistic to pathogens, or bioremediation when water quality is improved. However, the first probiotics tested in fish were commercial preparations devised for land animals. Though some effects were observed with such preparations, the survival of these bacteria was uncertain in aquatic environment. Most attempts to propose probiotics have been undertaken by isolating and selecting strains from aquatic environment. These microbes were Vibrionaceae, pseudomonads, lactic acid bacteria, Bacillus spp. and yeasts. Three main characteristics have been searched in microbes as candidates Ž . to improve the health of their host. 1 The antagonism to pathogens was shown in vitro in most Ž . Ž . cases. 2 The colonization potential of some candidate probionts was also studied. 3 Challenge tests confirmed that some strains could increase the resistance to disease of their host. Many other beneficial effects may be expected from probiotics, e.g., competition with pathogens for nutrients or for adhesion sites, and stimulation of the immune system. The most promising prospects are sketched out, but considerable efforts of research will be necessary to develop the applications to aquaculture. q 1999 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "8d0c5de2054b7c6b4ef97a211febf1d0",
"text": "This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-costsensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier. 1 Making decisions based on a cost matrix Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectatio n is computed using the conditional probability of each class given the example. Mathematically, let the (i; j) entry in a cost matrixC be the cost of predicting class i when the true class isj. If i = j then the prediction is correct, while if i 6= j the prediction is incorrect. The optimal prediction for an examplex is the classi that minimizes L(x; i) =Xj P (jjx)C(i; j): (1) Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For eachi,L(x; i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P (jjx) of each classj being the true class ofx. For an examplex, making the predictioni means acting as if is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to ct as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is mos t likely legitimate. 1.1 Cost matrix properties A cost matrixC always has the following structure when there are only two classes: actual negative actual positive predict negative C(0; 0) = 00 C(0; 1) = 01 predict positive C(1; 0) = 10 C(1; 1) = 11 Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, whi le columns correspond to actual classes, i.e. row/column = i/j predicted/actual. In our notation, the cost of a false positive is 10 while the cost of a false negative is 01. Conceptually, the cost of labeling an example incorrectly should always be greater th an the cost of labeling it correctly. Mathematically, it shoul d always be the case that 10 > 00 and 01 > 11. We call these conditions the “reasonableness” conditions. Suppose that the first reasonableness condition is violated , so 00 10 but still 01 > 11. In this case the optimal policy is to label all examples positive. Similarly, if 10 > 00 but 11 01 then it is optimal to label all examples negative. We leave the case where both reasonableness conditions are violated for the reader to analyze. 
Margineantu [2000] has pointed out that for some cost matrices, some class labels are never predicted by the optimal policy as given by Equation (1). We can state a simple, intuitive criterion for when this happens. Say that row m dominates row n in a cost matrix C if for all j, C(m, j) ≥ C(n, j). In this case the cost of predicting n is no greater than the cost of predicting m, regardless of what the true class j is. So it is optimal never to predict m. As a special case, the optimal prediction is always n if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions:",
"title": ""
},
{
"docid": "52e29410e4115f411407bcbd96a17ad0",
"text": "Empirical methods in geoparsing have thus far lacked a standard evaluation framework describing the task, data and metrics used to establish state-of-the-art systems. Evaluation is further made inconsistent, even unrepresentative of real world usage, by the lack of distinction between the different types of toponyms, which necessitates new guidelines, a consolidation of metrics and a detailed toponym taxonomy with implications for Named Entity Recognition (NER). To address these deficiencies, our manuscript introduces such a framework in three parts. Part 1) Task Definition: clarified via corpus linguistic analysis proposing a fine-grained Pragmatic Taxonomy of Toponyms with new guidelines. Part 2) Evaluation Data: shared via a dataset called GeoWebNews to provide test/train data to enable immediate use of our contributions. In addition to fine-grained Geotagging and Toponym Resolution (Geocoding), this dataset is also suitable for prototyping machine learning NLP models. Part 3) Metrics: discussed and reviewed for a rigorous evaluation with appropriate recommendations for NER/Geoparsing practitioners. We gratefully acknowledge the funding support of the Natural Environment Research Council (NERC) PhD Studentship (Milan Gritta NE/M009009/1), EPSRC (Nigel Collier EP/M005089/1) and MRC (Mohammad Taher Pilehvar MR/M025160/1 for PheneBank). We also acknowledge Cambridge University linguists Mina Frost and Qianchu (Flora) Liu for providing expertise and verification (IAA) during dataset construction/annotation. Milan Gritta E-mail: [email protected] Mohammad Taher Pilehvar E-mail: [email protected] Nigel Collier E-mail: [email protected] Language Technology Lab (LTL) Department of Theoretical and Applied Linguistics (DTAL) University of Cambridge, 9 West Road, Cambridge CB3 9DP ar X iv :1 81 0. 12 36 8v 2 [ cs .C L ] 2 N ov 2 01 8 2 Milan Gritta et al.",
"title": ""
},
{
"docid": "98a820c806b392e18b38d075b91a4fe9",
"text": "This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.",
"title": ""
}
] | scidocsrr |
aa5e9d637561714872ee658816d8e0aa | Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference | [
{
"docid": "d3997f030d5d7287a4c6557681dc7a46",
"text": "This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.",
"title": ""
},
{
"docid": "d8eee79312660f4da03a29372fc87d7e",
"text": "Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children’s Book Test and Who Did What datasets. To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.1",
"title": ""
},
{
"docid": "a986826041730d953dfbf9fbc1b115a6",
"text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"title": ""
}
] | [
{
"docid": "fd2450f5b02a2599be29b90a599ad31d",
"text": "Male genital injuries, demand prompt management to prevent long-term sexual and psychological damage. Injuries to the scrotum and contents may produce impaired fertility.We report our experience in diagnosing and managing a case of a foreign body in the scrotum following a boat engine blast accident. This case report highlights the need for a good history and thorough general examination to establish the mechanism of injury in order to distinguish between an embedded penetrating projectile injury and an injury with an exit wound. Prompt surgical exploration with hematoma evacuation limits complications.",
"title": ""
},
{
"docid": "11434fe02e1e810a85dd8b27747b0af6",
"text": "A model free auto tuning algorithm is developed by using simultaneous perturbation stochastic approximation (SPSA). For such a method, plant models are not required. A set of closed loop experiments are conducted to generate data for an online optimization procedure. The optimum of the parameters of the restricted structured controllers will be found via SPSA algorithm. Compared to the conventional gradient approximation methods, SPSA only needs the small number of measurement of the cost function. It will be beneficial to application with high dimensional parameters. In the paper, a cost function is formulated to directly reflect the control performances widely used in industry, like overshoot, settling time and integral of absolute error. Therefore, the proposed auto tuning method will naturally lead to the desired closed loop performance. A case study of auto tuning of spool position control in a twin spool two stage valve is conducted. Both simulation and experimental study in TI C2000 target demonstrate effectiveness of the algorithm.",
"title": ""
},
{
"docid": "0b51889817aca2afd7c1c754aa47f7de",
"text": "OBJECTIVE\nThis study aims to compare how national guidelines approach the management of obesity in reproductive age women.\n\n\nSTUDY DESIGN\nWe conducted a search for national guidelines in the English language on the topic of obesity surrounding the time of a pregnancy. We identified six primary source documents and several secondary source documents from five countries. Each document was then reviewed to identify: (1) statements acknowledging increased health risks related to obesity and reproductive outcomes, (2) recommendations for the management of obesity before, during, or after pregnancy.\n\n\nRESULTS\nAll guidelines cited an increased risk for miscarriage, birth defects, gestational diabetes, hypertension, fetal growth abnormalities, cesarean sections, difficulty with anesthesia, postpartum hemorrhage, and obesity in offspring. Counseling on the risks of obesity and weight loss before pregnancy were universal recommendations. There were substantial differences in the recommendations pertaining to gestational weight gain goals, nutrient and vitamin supplements, screening for gestational diabetes, and thromboprophylaxis among the guidelines.\n\n\nCONCLUSION\nStronger evidence from randomized trials is needed to devise consistent recommendations for obese reproductive age women. This research may also assist clinicians in overcoming one of the many obstacles they encounter when providing care to obese women.",
"title": ""
},
{
"docid": "607977a85696ecc91816cd9f2cf04bbf",
"text": "the paper presents a model integrating theories from collaboration research (i.e., social presence theory, channel expansion theory, and the task closure model) with a recent theory from technology adoption research (i.e., unified theory of acceptance and use of technology, abbreviated to utaut) to explain the adoption and use of collaboration technology. we theorize that collaboration technology characteristics, individual and group characteristics, task characteristics, and situational characteristics are predictors of performance expectancy, effort expectancy, social influence, and facilitating conditions in utaut. we further theorize that the utaut constructs, in concert with gender, age, and experience, predict intention to use a collaboration technology, which in turn predicts use. we conducted two field studies in Finland among (1) 349 short message service (SMS) users and (2) 447 employees who were potential users of a new collaboration technology in an organization. Our model was supported in both studies. the current work contributes to research by developing and testing a technology-specific model of adoption in the collaboration context. key worDS anD phraSeS: channel expansion theory, collaboration technologies, social presence theory, task closure model, technology acceptance, technology adoption, unified theory of acceptance and use of technology. technology aDoption iS one of the moSt mature StreamS in information systems (IS) research (see [65, 76, 77]). the benefit of such maturity is the availability of frameworks and models that can be applied to the study of interesting problems. while practical contributions are certain to accrue from such investigations, a key challenge for researchers is to ensure that studies yield meaningful scientific contributions. there have been several models explaining technology adoption and use, particularly since the late 1980s [76]. In addition to noting the maturity of this stream of research, Venkatesh et al. identified several important directions for future research and suggested that “one of the most important directions for future research is to tie this mature stream [technology adoption] of research into other established streams of work” [76, p. 470] (see also [70]). In research on technology adoption, the technology acceptance model (taM) [17] is the most widely employed theoretical model [76]. taM has been applied to a range of technologies and has been very predictive of individual technology adoption and use. the unified theory of acceptance and use of technology (utaut) [76] integrated eight distinct models of technology adoption and use, including taM. utaut extends taM by incorporating social influence and facilitating conditions. utaut is based in PrEDICtING COllaBOratION tEChNOlOGY uSE 11 the rich tradition of taM and provides a foundation for future research in technology adoption. utaut also incorporates four different moderators of key relationships. although utaut is more integrative, like taM, it still suffers from the limitation of being predictive but not particularly useful in providing explanations that can be used to design interventions that foster adoption (e.g., [72, 73]). there has been some research on general antecedents of perceived usefulness and perceived ease of use that are technology independent (e.g., [69, 73]). 
But far less attention has been paid to technology-specific antecedents that may provide significantly stronger guidance for the successful design and implementation of specific types of systems. Developing theory that is more focused and context specific—here, technology specific—is considered an important frontier for advances in IS research [53, 70]. Building on UTAUT to develop a model that will be more helpful will require a better understanding of how the UTAUT factors play out with different technologies [7, 76]. As a first step, it is important to extend UTAUT to a specific class of technologies [70, 76]. A model focused on a specific class of technology will be more explanatory compared to a general model that attempts to address many classes of technologies [70]. Such a focused model will also provide designers and managers with levers to augment adoption and use. One example is collaboration technology [20], a technology designed to assist two or more people to work together at the same place and time or at different places or different times [25, 26]. Technologies that facilitate collaboration via electronic means have become an important component of day-to-day life (both in and out of the workplace). Thus, it is not surprising that collaboration technologies have received considerable research attention over the past decades [24, 26, 77]. Several studies have examined the adoption of collaboration technologies, such as voice mail, e-mail, and group support systems (e.g., [3, 4, 44, 56, 63]). These studies focused on organizational factors leading to adoption (e.g., size, centralization) or on testing the boundary conditions of TAM (e.g., could TAM be applied to collaboration technologies). Given that adoption of collaboration technologies is not progressing as fast or as broadly as expected [20, 54], it seems a different approach is needed. It is possible that these two streams could inform each other to develop a more complete understanding of collaboration technology use, one in which we can begin to understand how collaboration factors influence adoption and use. A model that integrates knowledge from technology adoption and collaboration technology research is lacking, a void that this paper seeks to address. In doing so, we answer the call for research by Venkatesh et al. [76] to integrate the technology adoption stream with another dominant research stream, which in turn will move us toward a more cumulative and expansive nomological network (see [41, 70]). We also build on the work of Wixom and Todd [80] by examining the important role of technology characteristics leading to use. The current study will help us take a step toward alleviating one of the criticisms of IS research discussed by Benbasat and Zmud, especially in the context of technology adoption research: "we should neither focus our research on variables outside the nomological net nor exclusively on intermediate-level variables, such as ease of use, usefulness or behavioral intentions, without clarifying the IS nuances involved" [6, p. 193]. Specifically, our work accomplishes the goal of "developing conceptualizations and theories of IT [information technology] artifacts; and incorporating such conceptualizations and theories of IT artifacts" [53, p. 130] by extending UTAUT to incorporate the specific artifact of collaboration technology and its related characteristics.
In addition to the scientific value, such a model will provide greater value to practitioners who are attempting to foster successful use of a specific technology. Given this background, the primary objective of this paper is to develop and test a model to understand collaboration technology adoption that integrates UTAUT with key constructs from theories about collaboration technologies. We identify specific antecedents to UTAUT constructs by drawing from social presence theory [64], channel expansion theory [11] (a descendant of media richness theory [16]), and the task closure model [66], as well as a broad range of prior collaboration technology research. We test our model in two different studies conducted in Finland: the use of short message service (SMS) among working professionals and the use of a collaboration technology in an organization.",
"title": ""
},
{
"docid": "c12fb39060ec4dd2c7bb447352ea4e8a",
"text": "Lots of data from different domains is published as Linked Open Data (LOD). While there are quite a few browsers for such data, as well as intelligent tools for particular purposes, a versatile tool for deriving additional knowledge by mining the Web of Linked Data is still missing. In this system paper, we introduce the RapidMiner Linked Open Data extension. The extension hooks into the powerful data mining and analysis platform RapidMiner, and offers operators for accessing Linked Open Data in RapidMiner, allowing for using it in sophisticated data analysis workflows without the need for expert knowledge in SPARQL or RDF. The extension allows for autonomously exploring the Web of Data by following links, thereby discovering relevant datasets on the fly, as well as for integrating overlapping data found in different datasets. As an example, we show how statistical data from the World Bank on scientific publications, published as an RDF data cube, can be automatically linked to further datasets and analyzed using additional background knowledge from ten different LOD datasets.",
"title": ""
},
{
"docid": "b62da3e709d2bd2c7605f3d0463eff2f",
"text": "This study examines the economic effect of information security breaches reported in newspapers on publicly traded US corporations. We find limited evidence of an overall negative stock market reaction to public announcements of information security breaches. However, further investigation reveals that the nature of the breach affects this result. We find a highly significant negative market reaction for information security breaches involving unauthorized access to confidential data, but no significant reaction when the breach does not involve confidential information. Thus, stock market participants appear to discriminate across types of breaches when assessing their economic impact on affected firms. These findings are consistent with the argument that the economic consequences of information security breaches vary according to the nature of the underlying assets affected by the breach.",
"title": ""
},
{
"docid": "a79c9ee27a13b35c1d6710cf9a1ee9cf",
"text": "We present a new end-to-end network architecture for facial expression recognition with an attention model. It focuses attention in the human face and uses a Gaussian space representation for expression recognition. We devise this architecture based on two fundamental complementary components: (1) facial image correction and attention and (2) facial expression representation and classification. The first component uses an encoder-decoder style network and a convolutional feature extractor that are pixel-wise multiplied to obtain a feature attention map. The second component is responsible for obtaining an embedded representation and classification of the facial expression. We propose a loss function that creates a Gaussian structure on the representation space. To demonstrate the proposed method, we create two larger and more comprehensive synthetic datasets using the traditional BU3DFE and CK+ facial datasets. We compared results with the PreActResNet18 baseline. Our experiments on these datasets have shown the superiority of our approach in recognizing facial expressions.",
"title": ""
},
{
"docid": "8de09be7888299dc5dd30bbeb5578c35",
"text": "Scene text detection is challenging as the input may have different orientations, sizes, font styles, lighting conditions, perspective distortions and languages. This paper addresses the problem by designing a Rotational Region CNN (R2CNN). R2CNN includes a Text Region Proposal Network (Text-RPN) to estimate approximate text regions and a multitask refinement network to get the precise inclined box. Our work has the following features. First, we use a novel multi-task regression method to support arbitrarily-oriented scene text detection. Second, we introduce multiple ROIPoolings to address the scene text detection problem for the first time. Third, we use an inclined Non-Maximum Suppression (NMS) to post-process the detection candidates. Experiments show that our method outperforms the state-of-the-art on standard benchmarks: ICDAR 2013, ICDAR 2015, COCO-Text and MSRA-TD500.",
"title": ""
},
{
"docid": "fbe0c6e8cbaf6c419990c1a7093fe2a9",
"text": "Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.",
"title": ""
},
{
"docid": "d2e3b893e257d04da0cccbd4b1def9f7",
"text": "Augmented reality (AR) is currently considered as having potential for pedagogical applications. However, in science education, research regarding AR-aided learning is in its infancy. To understand how AR could help science learning, this review paper firstly has identified two major approaches of utilizing AR technology in science education, which are named as image-based AR and locationbased AR. These approaches may result in different affordances for science learning. It is then found that students’ spatial ability, practical skills, and conceptual understanding are often afforded by image-based AR and location-based AR usually supports inquiry-based scientific activities. After examining what has been done in science learning with AR supports, several suggestions for future research are proposed. For example, more research is required to explore learning experience (e.g., motivation or cognitive load) and learner characteristics (e.g., spatial ability or perceived presence) involved in AR. Mixed methods of investigating learning process (e.g., a content analysis and a sequential analysis) and in-depth examination of user experience beyond usability (e.g., affective variables of esthetic pleasure or emotional fulfillment) should be considered. Combining image-based and location-based AR technology may bring new possibility for supporting science learning. Theories including mental models, spatial cognition, situated cognition, and social constructivist learning are suggested for the profitable uses of future AR research in science education.",
"title": ""
},
{
"docid": "eaa37c0420dbc804eaf480d1167ad201",
"text": "This paper focuses on the problem of object detection when the annotation at training time is restricted to presence or absence of object instances at image level. We present a method based on features extracted from a Convolutional Neural Network and latent SVM that can represent and exploit the presence of multiple object instances in an image. Moreover, the detection of the object instances in the image is improved by incorporating in the learning procedure additional constraints that represent domain-specific knowledge such as symmetry and mutual exclusion. We show that the proposed method outperforms the state-of-the-art in weakly-supervised object detection and object classification on the Pascal VOC 2007 dataset.",
"title": ""
},
{
"docid": "eecc4c73eb7f784b7f03923f14d50224",
"text": "Gated-Attention (GA) Reader has been effective for reading comprehension. GA Reader makes two assumptions: (1) a uni-directional attention that uses an input query to gate token encodings of a document; (2) encoding at the cloze position of an input query is considered for answer prediction. In this paper, we propose Collaborative Gating (CG) and Self-Belief Aggregation (SBA) to address the above assumptions respectively. In CG, we first use an input document to gate token encodings of an input query so that the influence of irrelevant query tokens may be reduced. Then the filtered query is used to gate token encodings of an document in a collaborative fashion. In SBA, we conjecture that query tokens other than the cloze token may be informative for answer prediction. We apply self-attention to link the cloze token with other tokens in a query so that the importance of query tokens with respect to the cloze position are weighted. Then their evidences are weighted, propagated and aggregated for better reading comprehension. Experiments show that our approaches advance the state-of-theart results in CNN, Daily Mail, and Who Did What public test sets.",
"title": ""
},
{
"docid": "99e1ae882a1b74ffcbe5e021eb577e49",
"text": "This paper studies the problem of recognizing gender from full body images. This problem has not been addressed before, partly because of the variant nature of human bodies and clothing that can bring tough difficulties. However, gender recognition has high application potentials, e.g. security surveillance and customer statistics collection in restaurants, supermarkets, and even building entrances. In this paper, we build a system of recognizing gender from full body images, taken from frontal or back views. Our contributions are three-fold. First, to handle the variety of human body characteristics, we represent each image by a collection of patch features, which model different body parts and provide a set of clues for gender recognition. To combine the clues, we build an ensemble learning algorithm from those body parts to recognize gender from fixed view body images (frontal or back). Second, we relax the fixed view constraint and show the possibility to train a flexible classifier for mixed view images with the almost same accuracy as the fixed view case. At last, our approach is shown to be robust to small alignment errors, which is preferred in many applications.",
"title": ""
},
{
"docid": "0cd1400bce31ea35b3f142339737dc28",
"text": "LLC resonant converter is a nonlinear system, limiting the use of typical linear control methods. This paper proposed a new nonlinear control strategy, using load feedback linearization for an LLC resonant converter. Compared with the conventional PI controllers, the proposed feedback linearized control strategy can achieve better performance with elimination of the nonlinear characteristics. The LLC resonant converter's dynamic model is built based on fundamental harmonic approximation using extended describing function. By assuming the dynamics of resonant network is much faster than the output voltage and controller, the LLC resonant converter's model is simplified from seven-order state equations to two-order ones. Then, the feedback linearized control strategy is presented. A double loop PI controller is designed to regulate the modulation voltage. The switching frequency can be calculated as a function of the load, input voltage, and modulation voltage. Finally, a 200 W laboratory prototype is built to verify the proposed control scheme. The settling time of the LLC resonant converter is reduced from 38.8 to 20.4 ms under the positive load step using the proposed controller. Experimental results prove the superiority of the proposed feedback linearized controller over the conventional PI controller.",
"title": ""
},
{
"docid": "3ae5e7ac5433f2449cd893e49f1b2553",
"text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.",
"title": ""
},
{
"docid": "c796bc689e9b3e2b8d03525e5cd5908c",
"text": "As they grapple with increasingly large data sets, biologists and computer scientists uncork new bottlenecks. B iologists are joining the big-data club. With the advent of high-throughput genomics, life scientists are starting to grapple with massive data sets, encountering challenges with handling, processing and moving information that were once the domain of astronomers and high-energy physicists 1. With every passing year, they turn more often to big data to probe everything from the regulation of genes and the evolution of genomes to why coastal algae bloom, what microbes dwell where in human body cavities and how the genetic make-up of different cancers influences how cancer patients fare 2. The European Bioinformatics Institute (EBI) in Hinxton, UK, part of the European Molecular Biology Laboratory and one of the world's largest biology-data repositories, currently stores 20 petabytes (1 petabyte is 10 15 bytes) of data and backups about genes, proteins and small molecules. Genomic data account for 2 peta-bytes of that, a number that more than doubles every year 3 (see 'Data explosion'). This data pile is just one-tenth the size of the data store at CERN, Europe's particle-physics laboratory near Geneva, Switzerland. Every year, particle-collision events in CERN's Large Hadron Collider generate around 15 petabytes of data — the equivalent of about 4 million high-definition feature-length films. But the EBI and institutes like it face similar data-wrangling challenges to those at CERN, says Ewan Birney, associate director of the EBI. He and his colleagues now regularly meet with organizations such as CERN and the European Space Agency (ESA) in Paris to swap lessons about data storage, analysis and sharing. All labs need to manipulate data to yield research answers. As prices drop for high-throughput instruments such as automated Extremely powerful computers are needed to help biologists to handle big-data traffic jams.",
"title": ""
},
{
"docid": "3eec1e9abcb677a4bc8f054fa8827f4f",
"text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.",
"title": ""
},
{
"docid": "a63989ee86e2a57aae2d33421c61cd68",
"text": "As the rapid growth of multi-modal data, hashing methods for cross-modal retrieval have received considerable attention. Deep-networks-based cross-modal hashing methods are appealing as they can integrate feature learning and hash coding into end-to-end trainable frameworks. However, it is still challenging to find content similarities between different modalities of data due to the heterogeneity gap. To further address this problem, we propose an adversarial hashing network with attention mechanism to enhance the measurement of content similarities by selectively focusing on informative parts of multi-modal data. The proposed new adversarial network, HashGAN, consists of three building blocks: 1) the feature learning module to obtain feature representations, 2) the generative attention module to generate an attention mask, which is used to obtain the attended (foreground) and the unattended (background) feature representations, 3) the discriminative hash coding module to learn hash functions that preserve the similarities between different modalities. In our framework, the generative module and the discriminative module are trained in an adversarial way: the generator is learned to make the discriminator cannot preserve the similarities of multi-modal data w.r.t. the background feature representations, while the discriminator aims to preserve the similarities of multimodal data w.r.t. both the foreground and the background feature representations. Extensive evaluations on several benchmark datasets demonstrate that the proposed HashGAN brings substantial improvements over other state-ofthe-art cross-modal hashing methods.",
"title": ""
},
{
"docid": "85809b8e7811adb37314da2aaa28a70c",
"text": "Underwater wireless sensor networks (UWSNs) will pave the way for a new era of underwater monitoring and actuation applications. The envisioned landscape of UWSN applications will help us learn more about our oceans, as well as about what lies beneath them. They are expected to change the current reality where no more than 5% of the volume of the oceans has been observed by humans. However, to enable large deployments of UWSNs, networking solutions toward efficient and reliable underwater data collection need to be investigated and proposed. In this context, the use of topology control algorithms for a suitable, autonomous, and on-the-fly organization of the UWSN topology might mitigate the undesired effects of underwater wireless communications and consequently improve the performance of networking services and protocols designed for UWSNs. This article presents and discusses the intrinsic properties, potentials, and current research challenges of topology control in underwater sensor networks. We propose to classify topology control algorithms based on the principal methodology used to change the network topology. They can be categorized in three major groups: power control, wireless interface mode management, and mobility assisted–based techniques. Using the proposed classification, we survey the current state of the art and present an in-depth discussion of topology control solutions designed for UWSNs.",
"title": ""
},
{
"docid": "9916cbe61d57121030ee718bc03e0c17",
"text": "We propose a novel approach for constructing effective treatment policies when the observed data is biased and lacks counterfactual information. Learning in settings where the observed data does not contain all possible outcomes for all treatments is difficult since the observed data is typically biased due to existing clinical guidelines. This is an important problem in the medical domain as collecting unbiased data is expensive and so learning from the wealth of existing biased data is a worthwhile task. Our approach separates the problem into two stages: first we reduce the bias by learning a representation map using a novel auto-encoder network – this allows us to control the trade-off between the bias-reduction and the information loss – and then we construct effective treatment policies on the transformed data using a novel feedforward network. Separation of the problem into these two stages creates an algorithm that can be adapted to the problem at hand – the bias-reduction step can be performed as a preprocessing step for other algorithms. We compare our algorithm against state-of-art algorithms on two semi-synthetic datasets and demonstrate that our algorithm achieves a significant improvement in performance.",
"title": ""
}
] | scidocsrr |
5eb55d47e3845b1d7aa97071b70fbeb5 | TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections | [
{
"docid": "f6266e5c4adb4fa24cc353dccccaf6db",
"text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.",
"title": ""
},
{
"docid": "07846c1e97f72a02d876baf4c5435da6",
"text": "Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis/VAST paper data set and product review data sets.",
"title": ""
}
] | [
{
"docid": "6c1a1e47ce91b2d9ae60a0cfc972b7e4",
"text": "We investigate automatic classification of speculative language (‘hedging’), in biomedical text using weakly supervised machine learning. Our contributions include a precise description of the task with annotation guidelines, analysis and discussion, a probabilistic weakly supervised learning model, and experimental evaluation of the methods presented. We show that hedge classification is feasible using weakly supervised ML, and point toward avenues for future research.",
"title": ""
},
{
"docid": "1613f8b73465d52a3e850c894578ef2a",
"text": "In this paper, we evaluate the performance of Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) as a multiple access technique for mobile communication systems. The MC-LDSMA technique is compared with current multiple access techniques, OFDMA and SC-FDMA. The performance is evaluated in terms of cubic metric, block error rate, spectral efficiency and fairness. The aim is to investigate the expected gains of using MC-LDSMA in the uplink for next generation cellular systems. The simulation results of the link and system-level performance evaluation show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can considerably reduce the required transmission power and increase the spectral efficiency and fairness among the users.",
"title": ""
},
{
"docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "c1046ee16110438cb7d7bd0b5a9c4870",
"text": "Although integrating multiple levels of data into an analysis can often yield better inferences about the phenomenon under study, traditional methodologies used to combine multiple levels of data are problematic. In this paper, we discuss several methodologies under the rubric of multil evel analysis. Multil evel methods, we argue, provide researchers, particularly researchers using comparative data, substantial leverage in overcoming the typical problems associated with either ignoring multiple levels of data, or problems associated with combining lower-level and higherlevel data (including overcoming implicit assumptions of fixed and constant effects). The paper discusses several variants of the multil evel model and provides an application of individual-level support for European integration using comparative politi cal data from Western Europe.",
"title": ""
},
{
"docid": "71cf493e0026fe057b1100c5ad1118ad",
"text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.",
"title": ""
},
{
"docid": "628947fa49383b73eda8ad374423f8ce",
"text": "The proposed system for the cloud based automatic system involves the automatic updating of the data to the lighting system. It also reads the data from the base station in case of emergencies. Zigbee devices are used for wireless transmission of the data from the base station to the light system thus enabling an efficient street lamp control system. Infrared sensor and dimming control circuit is used to track the movement of human in a specific range and dims/bright the street lights accordingly hence saving a large amount of power. In case of emergencies data is sent from the particular light or light system and effective measures are taken accordingly.",
"title": ""
},
{
"docid": "068321516540ed9f5f05638bdfb7235a",
"text": "Cloud of Things (CoT) is a computing model that combines the widely popular cloud computing with Internet of Things (IoT). One of the major problems with CoT is the latency of accessing distant cloud resources from the devices, where the data is captured. To address this problem, paradigms such as fog computing and Cloudlets have been proposed to interpose another layer of computing between the clouds and devices. Such a three-layered cloud-fog-device computing architecture is touted as the most suitable approach for deploying many next generation ubiquitous computing applications. Programming applications to run on such a platform is quite challenging because disconnections between the different layers are bound to happen in a large-scale CoT system, where the devices can be mobile. This paper presents a programming language and system for a three-layered CoT system. We illustrate how our language and system addresses some of the key challenges in the three-layered CoT. A proof-of-concept prototype compiler and runtime have been implemented and several example applications are developed using it.",
"title": ""
},
{
"docid": "7abd63dac92df4b17fa1d7cd9e1ee039",
"text": "PURPOSE\nThis study aimed to prospectively analyze the outcomes of 304 feldspathic porcelain veneers prepared by the same operator, in 100 patients, that were in situ for up to 16 years.\n\n\nMATERIALS AND METHODS\nA total of 304 porcelain veneers on incisors, canines, and premolars in 100 patients completed by one prosthodontist between 1988 and 2003 were sequentially included. Preparations were designed with chamfer margins, incisal reduction, and palatal overlap. At least 80% of each preparation was in enamel. Feldspathic porcelain veneers from refractory dies were etched (hydrofluoric acid), silanated, and cemented (Vision 2, Mirage Dental Systems). Outcomes were expressed as percentages (success, survival, unknown, dead, repair, failure). The results were statistically analyzed using the chi-square test and Kaplan-Meier survival estimation. Statistical significance was set at P < .05.\n\n\nRESULTS\nThe cumulative survival for veneers was 96% +/- 1% at 5 to 6 years, 93% +/- 2% at 10 to 11 years, 91% +/- 3% at 12 to 13 years, and 73% +/- 16% at 15 to 16 years. The marked drop in survival between 13 and 16 years was the result of the death of 1 patient and the low number of veneers in that period. The cumulative survival was greater when different statistical methods were employed. Sixteen veneers in 14 patients failed. Failed veneers were associated with esthetics (31%), mechanical complications (31%), periodontal support (12.5%), loss of retention >2 (12.5%), caries (6%), and tooth fracture (6%). Statistically significantly fewer veneers survived as the time in situ increased.\n\n\nCONCLUSIONS\nFeldspathic porcelain veneers, when bonded to enamel substrate, offer a predictable long-term restoration with a low failure rate. The statistical methods used to calculate the cumulative survival can markedly affect the apparent outcome and thus should be clearly defined in outcome studies.",
"title": ""
},
{
"docid": "7a180e503a0b159d545047443524a05a",
"text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.",
"title": ""
},
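The term-counting method with contextual valence shifters described in the passage above lends itself to a brief sketch. The tiny word lists below are illustrative stand-ins for the General Inquirer and corpus-derived lexicons that the passage mentions, and the shifter weights are arbitrary assumptions rather than values from that work.

```python
# Toy sketch of term counting with contextual valence shifters
# (negations, intensifiers, diminishers). Word lists and weights are
# illustrative assumptions, not the resources used in the passage above.
POSITIVE = {"good", "great", "enjoyable", "masterpiece"}
NEGATIVE = {"bad", "boring", "awful", "mess"}
NEGATIONS = {"not", "never", "no"}          # reverse polarity
INTENSIFIERS = {"very", "extremely"}        # increase degree
DIMINISHERS = {"slightly", "barely"}        # decrease degree

def review_orientation(tokens):
    """Return a signed score: > 0 positive, < 0 negative, 0 neutral."""
    score = 0.0
    for i, tok in enumerate(tokens):
        polarity = 1.0 if tok in POSITIVE else -1.0 if tok in NEGATIVE else 0.0
        if polarity == 0.0:
            continue
        weight = 1.0
        if i > 0:  # look one token back for a valence shifter
            prev = tokens[i - 1]
            if prev in NEGATIONS:
                weight = -1.0
            elif prev in INTENSIFIERS:
                weight = 2.0
            elif prev in DIMINISHERS:
                weight = 0.5
        score += weight * polarity
    return score

print(review_orientation("never boring and not bad at all".split()))             # 2.0
print(review_orientation("a very good film with a slightly awful ending".split()))  # 1.5
```

Scores of this kind could also be fed as extra features into an SVM classifier, mirroring the passage's second, machine-learning method, though that combination is only suggested here, not reproduced from the paper.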
{
"docid": "f463ee2dd3a9243ed7536d88d8c2c568",
"text": "A new silicon controlled rectifier-based power-rail electrostatic discharge (ESD) clamp circuit was proposed with a novel trigger circuit that has very low leakage current in a small layout area for implementation. This circuit was successfully verified in a 40-nm CMOS process by using only low-voltage devices. The novel trigger circuit uses a diode-string based level-sensing ESD detection circuit, but not using MOS capacitor, which has very large leakage current. Moreover, the leakage current on the ESD detection circuit is further reduced, adding a diode in series with the trigger transistor. By combining these two techniques, the total silicon area of the power-rail ESD clamp circuit can be reduced three times, whereas the leakage current is three orders of magnitude smaller than that of the traditional design.",
"title": ""
},
{
"docid": "7fc6b08b5ceea71503ac2b1da7a8bdcb",
"text": "This paper introduces a method for optimizing the tiles of a quad-mesh. Given a quad-based surface, the goal is to generate a set of K quads whose instances can produce a tiled surface that approximates the input surface. A solution to the problem is a K-set tilable surface, which can lead to an effective cost reduction in the physical construction of the given surface. Rather than molding lots of different building blocks, a K-set tilable surface requires the construction of K prefabricated components only. To realize the K-set tilable surface, we use a cluster-optimize approach. First, we iteratively cluster and analyze: clusters of similar shapes are merged, while edge connections between the K quads on the target surface are analyzed to learn the induced flexibility of the K-set tilable surface. Then, we apply a non-linear optimization model with constraints that maintain the K quads connections and shapes, and show how quad-based surfaces are optimized into K-set tilable surfaces. Our algorithm is demonstrated on various surfaces, including some that mimic the exteriors of certain renowned building landmarks.",
"title": ""
},
{
"docid": "14fcb5c784de5fcb6950212f5b3eabb4",
"text": "This paper presents a pure textile, capacitive pressure sensor designed for integration into clothing to measure pressure on human body. The applications fields cover all domains where a soft and bendable sensor with a high local resolution is needed, e.g. in rehabilitation, pressure-sore prevention or motion detection due to muscle activities. We developed several textile sensors with spatial resolution of 2 times 2 cm and an average error below 4 percent within the measurement range 0 to 10 N/cm2. Applied on the upper arm the textile pressure sensor determines the deflection of the forearm between 0 and 135 degrees due to the muscle bending.",
"title": ""
},
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "3db6fc042a82319935bf5dd0d1491e89",
"text": "We present a piezoelectric-on-silicon Lorentz force magnetometer (LFM) based on a mechanically coupled array of clamped–clamped beam resonators for the detection of lateral ( $xy$ plane) magnetic fields with an extended operating bandwidth of 1.36 kHz. The proposed device exploits piezoelectric transduction to greatly enhance the electromechanical coupling efficiency, which benefits the device sensitivity. Coupling multiple clamped–clamped beams increases the area for piezoelectric transduction, which further increases the sensitivity. The reported device has the widest operating bandwidth among LFMs reported to date with comparable normalized sensitivity despite the quality factor being limited to 30 when operating at ambient pressure instead of vacuum as in most cases of existing LFMs.",
"title": ""
},
{
"docid": "6e4f0a770fe2a34f99957f252110b6bd",
"text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.",
"title": ""
},
{
"docid": "dd170ec01ee5b969605dace70e283664",
"text": "This work discusses the regulation of the ball and plate system, the problemis to design a control laws which generates a voltage u for the servomotors to move the ball from the actual position to a desired one. The controllers are constructed by introducing nonlinear compensation terms into the traditional PD controller. In this paper, a complete physical system and controller design is explored from conception to modeling to testing and implementation. The stability of the control is presented. Experiment results are obtained via our prototype of the ball and plate system.",
"title": ""
},
{
"docid": "ae0d8d1dec27539502cd7e3030a3fe42",
"text": "Thee KL divergence is the most commonly used measure for comparing query and document language models in the language modeling framework to ad hoc retrieval. Since KL is rank equivalent to a specific weighted geometric mean, we examine alternative weighted means for language-model comparison, as well as alternative divergence measures. The study includes analysis of the inverse document frequency (IDF) effect of the language-model comparison methods. Empirical evaluation, performed with different types of queries (short and verbose) and query-model induction approaches, shows that there are methods that often outperform the KL divergence in some settings.",
"title": ""
},
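As a rough illustration of the language-model comparison the passage above discusses, the sketch below scores a document by the negative KL divergence between a query language model and a Dirichlet-smoothed document model. The smoothing choice, the parameter value, and the toy texts are assumptions made here for illustration, not details taken from that work.

```python
import math
from collections import Counter

def mle_lm(tokens):
    """Maximum-likelihood unigram language model."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def dirichlet_doc_model(doc_tokens, collection_lm, mu=2000.0):
    """Dirichlet-smoothed document model p(w|d); mu = 2000 is an assumed value."""
    counts = Counter(doc_tokens)
    dlen = len(doc_tokens)
    return lambda w: (counts[w] + mu * collection_lm.get(w, 1e-9)) / (dlen + mu)

def neg_kl_score(query_lm, doc_model):
    """-KL(query || doc); higher is better. Dropping the query-only entropy term
    leaves the rank-equivalent cross entropy sum_w p(w|q) * log p(w|d)."""
    return -sum(p_q * math.log(p_q / doc_model(w)) for w, p_q in query_lm.items())

collection = "cheap flights to rome cheap hotels in rome weather".split()
document = "cheap flights and cheap hotels in rome".split()
query = "cheap rome flights".split()
print(neg_kl_score(mle_lm(query), dirichlet_doc_model(document, mle_lm(collection))))
```

Swapping `neg_kl_score` for another divergence or weighted mean, as the passage suggests, would only change that one function while the smoothing and ranking scaffolding stays the same.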
{
"docid": "f91007844639e431b2f332f6f32df33b",
"text": "Moore type II Entire Condyle fractures of the tibia plateau represent a rare and highly unstable fracture pattern that usually results from high impact traumas. Specific recommendations regarding the surgical treatment of these fractures are sparse. We present a series of Moore type II fractures treated by open reduction and internal fixation through a direct dorsal approach. Five patients (3 females, 2 males) with Entire Condyle fractures were retrospectively analyzed after a mean follow-up period of 39 months (range 12–61 months). Patient mean age at the time of operation was 36 years (range 26–43 years). Follow-up included clinical and radiological examination. Furthermore, all patient finished a SF36 and Lysholm knee score questionnaire. Average range of motion was 127/0/1° with all patients reaching full extension at the time of last follow up. Patients reached a mean Lysholm score of 81.2 points (range 61–100 points) and an average SF36 of 82.36 points (range 53.75–98.88 points). One patient sustained deep wound infection after elective implant removal 1 year after the initial surgery. Overall all patients were highly satisfied with the postoperative result. The direct dorsal approach to the tibial plateau represents an adequate method to enable direct fracture exposure, open reduction, and internal fixation in posterior shearing medial Entire Condyle fractures and is especially valuable when also the dorso-lateral plateau is depressed.",
"title": ""
},
{
"docid": "11357967d7e83c45bb1a6ba3edfebac2",
"text": "We report a unique MEMS magnetometer based on a disk shaped radial contour mode thin-film piezoelectric on silicon (TPoS) CMOS-compatible resonator. This is the first device of its kind that targets operation under atmospheric pressure conditions as opposed that existing Lorentz force MEMS magnetometers that depend on vacuum. We exploit the chosen vibration mode to enhance coupling to deliver a field sensitivity of 10.92 mV/T while operating at a resonant frequency of 6.27 MHz, despite of a sub-optimal mechanical quality (Q) factor of 697 under ambient conditions in air.",
"title": ""
},
{
"docid": "cd23761c6e6eb8be8915612c995c29e4",
"text": "In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in forms of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6% to 23.8% of $micro$-$f_1$ in multi-label node classification and 5% to 70.8% of $MAP$ in link prediction.",
"title": ""
}
] | scidocsrr |
52c1d35a8fd58fe024f3b5b19174c2ce | Blockchain And Its Applications | [
{
"docid": "469c17aa0db2c70394f081a9a7c09be5",
"text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.",
"title": ""
},
{
"docid": "4deea3312fe396f81919b07462551682",
"text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent",
"title": ""
}
] | [
{
"docid": "98d998eae1fa7a00b73dcff0251f0bbd",
"text": "Imagery texts are usually organized as a hierarchy of several visual elements, i.e. characters, words, text lines and text blocks. Among these elements, character is the most basic one for various languages such as Western, Chinese, Japanese, mathematical expression and etc. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast of location annotated characters, which are expensive to obtain. Actually, the existing real text datasets are mostly annotated in word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either in tight quadrangles or the more loose bounding boxes, for character detector training. When applied in scene text detection, we are thus able to train a robust character detector by exploiting word annotations in the rich large-scale real scene text datasets, e.g. ICDAR15 [19] and COCO-text [39]. The character detector acts as a key role in the pipeline of our text detection engine. It achieves the state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline by various scenarios, including deformed text detection and math expression recognition.",
"title": ""
},
{
"docid": "d6ca38ccad91c0c2c51ba3dd5be454b2",
"text": "Dirty data is a serious problem for businesses leading to incorrect decision making, inefficient daily operations, and ultimately wasting both time and money. Dirty data often arises when domain constraints and business rules, meant to preserve data consistency and accuracy, are enforced incompletely or not at all in application code. In this work, we propose a new data-driven tool that can be used within an organization’s data quality management process to suggest possible rules, and to identify conformant and non-conformant records. Data quality rules are known to be contextual, so we focus on the discovery of context-dependent rules. Specifically, we search for conditional functional dependencies (CFDs), that is, functional dependencies that hold only over a portion of the data. The output of our tool is a set of functional dependencies together with the context in which they hold (for example, a rule that states for CS graduate courses, the course number and term functionally determines the room and instructor). Since the input to our tool will likely be a dirty database, we also search for CFDs that almost hold. We return these rules together with the non-conformant records (as these are potentially dirty records). We present effective algorithms for discovering CFDs and dirty values in a data instance. Our discovery algorithm searches for minimal CFDs among the data values and prunes redundant candidates. No universal objective measures of data quality or data quality rules are known. Hence, to avoid returning an unnecessarily large number of CFDs and only those that are most interesting, we evaluate a set of interest metrics and present comparative results using real datasets. We also present an experimental study showing the scalability of our techniques.",
"title": ""
},
{
"docid": "d65376ed544623a927a868b35394409e",
"text": "The balance compensating techniques for asymmetric Marchand balun are presented in this letter. The amplitude and phase difference are characterized explicitly by S21 and S31, from which the factors responsible for the balance compensating are determined. Finally, two asymmetric Marchand baluns, which have normal and enhanced balance compensation, respectively, are designed and fabricated in a 0.18 μm CMOS technology for demonstration. The simulation and measurement results show that the proposed balance compensating techniques are valid in a very wide frequency range up to millimeter-wave (MMW) band.",
"title": ""
},
{
"docid": "99c29c6cacb623a857817c412d6d9515",
"text": "Considering the rapid growth of China’s elderly rural population, establishing both an adequate and a financially sustainable rural pension system is a major challenge. Focusing on financial sustainability, this article defines this concept of financial sustainability before constructing sound actuarial models for China’s rural pension system. Based on these models and statistical data, the analysis finds that the rural pension funding gap should rise from 97.80 billion Yuan in 2014 to 3062.31 billion Yuan in 2049, which represents an annual growth rate of 10.34%. This implies that, as it stands, the rural pension system in China is not financially sustainable. Finally, the article explains how this problem could be fixed through policy recommendations based on recent international experiences.",
"title": ""
},
{
"docid": "b8fa649e8b5a60a05aad257a0a364b51",
"text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.",
"title": ""
},
{
"docid": "117c66505964344d9c350a4e57a4a936",
"text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.",
"title": ""
},
{
"docid": "28fa91e4476522f895a6874ebc967cfa",
"text": "The lifetime of micro electro–thermo–mechanical actuators with complex electro–thermo–mechanical coupling mechanisms can be decreased significantly due to unexpected failure events. Even more serious is the fact that various failures are tightly coupled due to micro-size and multi-physics effects. Interrelation between performance and potential failures should be established to predict reliability of actuators and improve their design. Thus, a multiphysics modeling approach is proposed to evaluate such interactive effects of failure mechanisms on actuators, where potential failures are pre-analyzed via FMMEA (Failure Modes, Mechanisms, and Effects Analysis) tool for guiding the electro–thermo–mechanical-reliability modeling process. Peak values of temperature, thermal stresses/strains and tip deflection are estimated as indicators for various failure modes and factors (e.g. residual stresses, thermal fatigue, electrical overstress, plastic deformation and parameter variations). Compared with analytical solutions and experimental data, the obtained simulation results were found suitable for coupled performance and reliability analysis of micro actuators and assessment of their design.",
"title": ""
},
{
"docid": "e502cdbbbf557c8365b0d4b69745e225",
"text": "This half-day hands-on studio will teach how to design and develop effective interfaces for head mounted and wrist worn wearable computers through the application of user-centered design principles. Attendees will learn gain the knowledge and tools needed to rapidly develop prototype applications, and also complete a hands-on design task. They will also learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.",
"title": ""
},
{
"docid": "7e004a7b6a39ff29176dd19a07c15448",
"text": "All humans will become presbyopic as part of the aging process where the eye losses the ability to focus at different depths. Progressive additive lenses (PALs) allow a person to focus on objects located at near versus far by combing lenses of different strengths within the same spectacle. However, it is unknown why some patients easily adapt to wearing these lenses while others struggle and complain of vertigo, swim, and nausea as well as experience difficulties with balance. Sixteen presbyopes (nine who adapted to PALs and seven who had tried but could not adapt) participated in this study. This research investigated vergence dynamics and its adaptation using a short-term motor learning experiment to asses the ability to adapt. Vergence dynamics were on average faster and the ability to change vergence dynamics was also greater for presbyopes who adapted to progressive lenses compared to those who could not. Data suggest that vergence dynamics and its adaptation may be used to predict which patients will easily adapt to progressive lenses and discern those who will have difficulty.",
"title": ""
},
{
"docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76",
"text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "213313382d4e5d24a065d551012887ed",
"text": "The authors present full wave simulations and experimental results of propagation of electromagnetic waves in shallow seawaters. Transmitter and receiver antennas are ten-turns loops placed on the seabed. Some propagation frameworks are presented and simulated. Finally, simulation results are compared with experimental ones.",
"title": ""
},
{
"docid": "b02dcd4d78f87d8ac53414f0afd8604b",
"text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.",
"title": ""
},
{
"docid": "caab00ae6fcae59258ad4e45f787db64",
"text": "Traditional bullying has received considerable research but the emerging phenomenon of cyber-bullying much less so. Our study aims to investigate environmental and psychological factors associated with traditional and cyber-bullying. In a school-based 2-year prospective survey, information was collected on 1,344 children aged 10 including bullying behavior/experience, depression, anxiety, coping strategies, self-esteem, and psychopathology. Parents reported demographic data, general health, and attention-deficit hyperactivity disorder (ADHD) symptoms. These were investigated in relation to traditional and cyber-bullying perpetration and victimization at age 12. Male gender and depressive symptoms were associated with all types of bullying behavior and experience. Living with a single parent was associated with perpetration of traditional bullying while higher ADHD symptoms were associated with victimization from this. Lower academic achievement and lower self esteem were associated with cyber-bullying perpetration and victimization, and anxiety symptoms with cyber-bullying perpetration. After adjustment, previous bullying perpetration was associated with victimization from cyber-bullying but not other outcomes. Cyber-bullying has differences in predictors from traditional bullying and intervention programmes need to take these into consideration.",
"title": ""
},
{
"docid": "e5aed574fbe4560a794cf8b77fb84192",
"text": "Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map. Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping.",
"title": ""
},
{
"docid": "22bb6af742b845dea702453b6b14ef3a",
"text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.",
"title": ""
},
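The passage above frames repairing as maximizing the likelihood of a sequence with respect to the probability of speed changes. The sketch below illustrates only the scoring side of that idea, under a zero-mean Gaussian model of speed changes that is an assumption for illustration; the exact and heuristic repair-search algorithms the passage mentions are not reproduced.

```python
import math

def speed_changes(values, times):
    """Speeds between consecutive points, then differences of consecutive speeds."""
    speeds = [(values[i + 1] - values[i]) / (times[i + 1] - times[i])
              for i in range(len(values) - 1)]
    return [speeds[i + 1] - speeds[i] for i in range(len(speeds) - 1)]

def sequence_log_likelihood(values, times, sigma=1.0):
    """Log-likelihood under a zero-mean Gaussian model of speed changes
    (an assumed distribution; the passage's model is learned from data)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2) - c ** 2 / (2 * sigma ** 2)
               for c in speed_changes(values, times))

times = [0, 1, 2, 3, 4]
observed = [10.0, 10.9, 13.5, 12.1, 13.0]   # contains a small, in-range error
candidate = [10.0, 10.9, 12.0, 12.1, 13.0]  # one possible repair of that point
# The repaired sequence scores higher, so a likelihood-maximizing repair prefers it.
print(sequence_log_likelihood(observed, times))
print(sequence_log_likelihood(candidate, times))
```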
{
"docid": "cc8a4744f05d5f46feacaff27b91a86c",
"text": "In the recent past, several sampling-based algorithms have been proposed to compute trajectories that are collision-free and dynamically-feasible. However, the outputs of such algorithms are notoriously jagged. In this paper, by focusing on robots with car-like dynamics, we present a fast and simple heuristic algorithm, named Convex Elastic Smoothing (CES) algorithm, for trajectory smoothing and speed optimization. The CES algorithm is inspired by earlier work on elastic band planning and iteratively performs shape and speed optimization. The key feature of the algorithm is that both optimization problems can be solved via convex programming, making CES particularly fast. A range of numerical experiments show that the CES algorithm returns high-quality solutions in a matter of a few hundreds of milliseconds and hence appears amenable to a real-time implementation.",
"title": ""
},
{
"docid": "f44d3512cd8658f824b0ba0ea5a69e4a",
"text": "Customer retention is a major issue for various service-based organizations particularly telecom industry, wherein predictive models for observing the behavior of customers are one of the great instruments in customer retention process and inferring the future behavior of the customers. However, the performances of predictive models are greatly affected when the real-world data set is highly imbalanced. A data set is called imbalanced if the samples size from one class is very much smaller or larger than the other classes. The most commonly used technique is over/under sampling for handling the class-imbalance problem (CIP) in various domains. In this paper, we survey six well-known sampling techniques and compare the performances of these key techniques, i.e., mega-trend diffusion function (MTDF), synthetic minority oversampling technique, adaptive synthetic sampling approach, couples top-N reverse k-nearest neighbor, majority weighted minority oversampling technique, and immune centroids oversampling technique. Moreover, this paper also reveals the evaluation of four rules-generation algorithms (the learning from example module, version 2 (LEM2), covering, exhaustive, and genetic algorithms) using publicly available data sets. The empirical results demonstrate that the overall predictive performance of MTDF and rules-generation based on genetic algorithms performed the best as compared with the rest of the evaluated oversampling methods and rule-generation algorithms.",
"title": ""
},
{
"docid": "3e9de22ac9f81cf3233950a0d72ef15a",
"text": "Increasing of head rise (HR) and decreasing of head loss (HL), simultaneously, are important purpose in the design of different types of fans. Therefore, multi-objective optimization process is more applicable for the design of such turbo machines. In the present study, multi-objective optimization of Forward-Curved (FC) blades centrifugal fans is performed at three steps. At the first step, Head rise (HR) and the Head loss (HL) in a set of FC centrifugal fan is numerically investigated using commercial software NUMECA. Two meta-models based on the evolved group method of data handling (GMDH) type neural networks are obtained, at the second step, for modeling of HR and HL with respect to geometrical design variables. Finally, using obtained polynomial neural networks, multi-objective genetic algorithms are used for Pareto based optimization of FC centrifugal fans considering two conflicting objectives, HR and HL. It is shown that some interesting and important relationships as useful optimal design principles involved in the performance of FC fans can be discovered by Pareto based multi-objective optimization of the obtained polynomial meta-models representing their HR and HL characteristics. Such important optimal principles would not have been obtained without the use of both GMDH type neural network modeling and the Pareto optimization approach.",
"title": ""
},
{
"docid": "bddf8420c2dd67dd5be10556088bf653",
"text": "The Hadoop Distributed File System (HDFS) is a distributed storage system that stores large-scale data sets reliably and streams those data sets to applications at high bandwidth. HDFS provides high performance, reliability and availability by replicating data, typically three copies of every data. The data in HDFS changes in popularity over time. To get better performance and higher disk utilization, the replication policy of HDFS should be elastic and adapt to data popularity. In this paper, we describe ERMS, an elastic replication management system for HDFS. ERMS provides an active/standby storage model for HDFS. It utilizes a complex event processing engine to distinguish real-time data types, and then dynamically increases extra replicas for hot data, cleans up these extra replicas when the data cool down, and uses erasure codes for cold data. ERMS also introduces a replica placement strategy for the extra replicas of hot data and erasure coding parities. The experiments show that ERMS effectively improves the reliability and performance of HDFS and reduce storage overhead.",
"title": ""
},
{
"docid": "40beda0d1e99f4cc5a15a3f7f6438ede",
"text": "One of the major challenges with electric shipboard power systems (SPS) is preserving the survivability of the system under fault situations. Some minor faults in SPS can result in catastrophic consequences. Therefore, it is essential to investigate available fault management techniques for SPS applications that can enhance SPS robustness and reliability. Many recent studies in this area take different approaches to address fault tolerance in SPSs. This paper provides an overview of the concepts and methodologies that are utilized to deal with faults in the electric SPS. First, a taxonomy of the types of faults and their sources in SPS is presented; then, the methods that are used to detect, identify, isolate, and manage faults are reviewed. Furthermore, common techniques for designing a fault management system in SPS are analyzed and compared. This paper also highlights several possible future research directions.",
"title": ""
}
] | scidocsrr |
563185b3c4f805438a9fbd53f5aeb52c | A Knowledge-Grounded Neural Conversation Model | [
{
"docid": "5cc1f15c45f57d1206e9181dc601ee4a",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a questionanswering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N based tracking model.",
"title": ""
},
{
"docid": "9b30a07edc14ed2d1132421d8f372cd2",
"text": "Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation/enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.",
"title": ""
},
{
"docid": "56bad8cef0c8ed0af6882dbc945298ef",
"text": "We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.",
"title": ""
}
] | [
{
"docid": "c2cb1c6fcf040fa6514c2e281b3bfacb",
"text": "We analyze the line simpli cation algorithm reported by Douglas and Peucker and show that its worst case is quadratic in n, the number of input points. Then we give a algorithm, based on path hulls, that uses the geometric structure of the problem to attain a worst-case running time proportional to n log 2 n, which is the best case of the Douglas algorithm. We give complete C code and compare the two algorithms theoretically, by operation counts, and practically, by machine timings.",
"title": ""
},
{
"docid": "87e0bec51e1188b7c8ae88c2e111b2b5",
"text": "For the last few years, the EC Commission has been reviewing its application of Article 82EC which prohibits the abuse of a dominant position on the Common Market. The review has resulted in a Communication from the EC Commission which for the first time sets out its enforcement priorities under Article 82EC. The review had been limited to the so-called ‘exclusionary’ abuses and excluded ‘exploitative’ abuses; the enforcement priorities of the EC Commission set out in the Guidance (2008) are also limited to ‘exclusionary’ abuses. This is, however, odd since the EC Commission expresses the objective of Article 82EC as enhancing consumer welfare: exploitative abuses can directly harm consumers unlike exclusionary abuses which can only indirectly harm consumers as the result of exclusion of competitors. This paper questions whether and under which circumstances exploitation can and/or should be found ‘abusive’. It argues that ‘exploitative’ abuse can and should be used as the test of anticompetitive effects on the market under an effects-based approach and thus conduct should only be found abusive if it is ‘exploitative’. Similarly, mere exploitation does not demonstrate harm to competition and without the latter, exploitation on its own should not be found abusive. December 2008",
"title": ""
},
{
"docid": "059aed9f2250d422d76f3e24fd62bed8",
"text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms",
"title": ""
},
{
"docid": "2eebc7477084b471f9e9872ba8751359",
"text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.",
"title": ""
},
{
"docid": "a425425658207587c079730a68599572",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstoLorg/aboutiterms.html. JSTOR's Terms and Conditions ofDse provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. Operations Research is published by INFORMS. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/jowllalslinforms.html.",
"title": ""
},
{
"docid": "37e7ee6d3cc3a999ba7f4bd6dbaa27e7",
"text": "Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multi-modal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. This review article provides a factual listing of methods and summarizes the broad scientific challenges faced in the field of medical image fusion. We characterize the medical image fusion research based on (1) the widely used image fusion methods, (2) imaging modalities, and (3) imaging of organs that are under study. This review concludes that even though there exists several open ended technological and scientific challenges, the fusion of medical images has proved to be useful for advancing the clinical reliability of using medical imaging for medical diagnostics and analysis, and is a scientific discipline that has the potential to significantly grow in the coming years.",
"title": ""
},
{
"docid": "7b7b0c7ef54255839f9ff9d09669fe11",
"text": "Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista’s news recommender system, and Docear’s research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as the second-best approach, while in another scenario the same content-based filtering approach was the worst performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithms’ user model depended on users’ age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach’s performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.",
"title": ""
},
{
"docid": "ed6e8f1d3bcfdd7586af7ed2541bf23b",
"text": "Many real-world datasets are comprised of different representations or views which often provide information complementary to each other. To integrate information from multiple views in the unsupervised setting, multiview clustering algorithms have been developed to cluster multiple views simultaneously to derive a solution which uncovers the common latent structure shared by multiple views. In this paper, we propose a novel NMFbased multi-view clustering algorithm by searching for a factorization that gives compatible clustering solutions across multiple views. The key idea is to formulate a joint matrix factorization process with the constraint that pushes clustering solution of each view towards a common consensus instead of fixing it directly. The main challenge is how to keep clustering solutions across different views meaningful and comparable. To tackle this challenge, we design a novel and effective normalization strategy inspired by the connection between NMF and PLSA. Experimental results on synthetic and several real datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "edf52710738647f7ebd4c017ddf56c2c",
"text": "Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed in order to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, designed to perform urban reconnaissance missions, that won the MAGIC 2010 competition. This paper describes a variety of autonomous systems which require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. In this paper, we will describe technical contributions throughout our system that played a significant role in the performance of our system. We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain.",
"title": ""
},
{
"docid": "758eb7a0429ee116f7de7d53e19b3e02",
"text": "With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a 1000 years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.",
"title": ""
},
{
"docid": "81a45cb4ca02c38839a81ad567eb1491",
"text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.",
"title": ""
},
{
"docid": "69ab1b5f07c307397253f6619681a53f",
"text": "BACKGROUND\nIncreasing evidence demonstrates that motor-skill memories improve across a night of sleep, and that non-rapid eye movement (NREM) sleep commonly plays a role in orchestrating these consolidation enhancements. Here we show the benefit of a daytime nap on motor memory consolidation and its relationship not simply with global sleep-stage measures, but unique characteristics of sleep spindles at regionally specific locations; mapping to the corresponding memory representation.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nTwo groups of subjects trained on a motor-skill task using their left hand - a paradigm known to result in overnight plastic changes in the contralateral, right motor cortex. Both groups trained in the morning and were tested 8 hr later, with one group obtaining a 60-90 minute intervening midday nap, while the other group remained awake. At testing, subjects that did not nap showed no significant performance improvement, yet those that did nap expressed a highly significant consolidation enhancement. Within the nap group, the amount of offline improvement showed a significant correlation with the global measure of stage-2 NREM sleep. However, topographical sleep spindle analysis revealed more precise correlations. Specifically, when spindle activity at the central electrode of the non-learning hemisphere (left) was subtracted from that in the learning hemisphere (right), representing the homeostatic difference following learning, strong positive relationships with offline memory improvement emerged-correlations that were not evident for either hemisphere alone.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results demonstrate that motor memories are dynamically facilitated across daytime naps, enhancements that are uniquely associated with electrophysiological events expressed at local, anatomically discrete locations of the brain.",
"title": ""
},
{
"docid": "cc33bcc919e5878fa17fd17b63bb8a34",
"text": "This paper deals with mean-field Eshelby-based homogenization techniques for multi-phase composites and focuses on three subjects which in our opinion deserved more attention than they did in the existing literature. Firstly, for two-phase composites, that is when in a given representative volume element all the inclusions have the same material properties, aspect ratio and orientation, an interpolative double inclusion model gives perhaps the best predictions to date for a wide range of volume fractions and stiffness contrasts. Secondly, for multi-phase composites (including two-phase composites with non-aligned inclusions as a special case), direct homogenization schemes might lead to a non-symmetric overall stiffness tensor, while a two-step homogenization procedure gives physically acceptable results. Thirdly, a general procedure allows to formulate the thermo-elastic version of any homogenization model defined by its isothermal strain concentration tensors. For all three subjects, the theory is presented in detail and validated against experimental data or finite element results for numerous composite systems. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "224a2739ade3dd64e474f5c516db89a7",
"text": "Big data storage and processing are considered as one of the main applications for cloud computing systems. Furthermore, the development of the Internet of Things (IoT) paradigm has advanced the research on Machine to Machine (M2M) communications and enabled novel tele-monitoring architectures for E-Health applications. However, there is a need for converging current decentralized cloud systems, general software for processing big data and IoT systems. The purpose of this paper is to analyze existing components and methods of securely integrating big data processing with cloud M2M systems based on Remote Telemetry Units (RTUs) and to propose a converged E-Health architecture built on Exalead CloudView, a search based application. Finally, we discuss the main findings of the proposed implementation and future directions.",
"title": ""
},
{
"docid": "c3eaaa0812eb9ab7e5402339733daa28",
"text": "BACKGROUND\nHypovitaminosis D and a low calcium intake contribute to increased parathyroid function in elderly persons. Calcium and vitamin D supplements reduce this secondary hyperparathyroidism, but whether such supplements reduce the risk of hip fractures among elderly people is not known.\n\n\nMETHODS\nWe studied the effects of supplementation with vitamin D3 (cholecalciferol) and calcium on the frequency of hip fractures and other nonvertebral fractures, identified radiologically, in 3270 healthy ambulatory women (mean [+/- SD] age, 84 +/- 6 years). Each day for 18 months, 1634 women received tricalcium phosphate (containing 1.2 g of elemental calcium) and 20 micrograms (800 IU) of vitamin D3, and 1636 women received a double placebo. We measured serial serum parathyroid hormone and 25-hydroxyvitamin D (25(OH)D) concentrations in 142 women and determined the femoral bone mineral density at base line and after 18 months in 56 women.\n\n\nRESULTS\nAmong the women who completed the 18-month study, the number of hip fractures was 43 percent lower (P = 0.043) and the total number of nonvertebral fractures was 32 percent lower (P = 0.015) among the women treated with vitamin D3 and calcium than among those who received placebo. The results of analyses according to active treatment and according to intention to treat were similar. In the vitamin D3-calcium group, the mean serum parathyroid hormone concentration had decreased by 44 percent from the base-line value at 18 months (P < 0.001) and the serum 25(OH)D concentration had increased by 162 percent over the base-line value (P < 0.001). The bone density of the proximal femur increased 2.7 percent in the vitamin D3-calcium group and decreased 4.6 percent in the placebo group (P < 0.001).\n\n\nCONCLUSIONS\nSupplementation with vitamin D3 and calcium reduces the risk of hip fractures and other nonvertebral fractures among elderly women.",
"title": ""
},
{
"docid": "f62b7597dd84e4bb18a32fc1e5713394",
"text": "Automated personality prediction from social media is gaining increasing attention in natural language processing and social sciences communities. However, due to high labeling costs and privacy issues, the few publicly available datasets are of limited size and low topic diversity. We address this problem by introducing a large-scale dataset derived from Reddit, a source so far overlooked for personality prediction. The dataset is labeled with Myers-Briggs Type Indicators (MBTI) and comes with a rich set of features for more than 9k users. We carry out a preliminary feature analysis, revealing marked differences between the MBTI dimensions and poles. Furthermore, we use the dataset to train and evaluate benchmark personality prediction models, achieving macro F1-scores between 67% and 82% on the individual dimensions and 82% accuracy for exact or one-off accurate type prediction. These results are encouraging and comparable with the reliability of standardized tests.",
"title": ""
},
{
"docid": "bd88c04b8862f699e122e248ef416963",
"text": "Optic ataxia is a high-order deficit in reaching to visual goals that occurs with posterior parietal cortex (PPC) lesions. It is a component of Balint's syndrome that also includes attentional and gaze disorders. Aspects of optic ataxia are misreaching in the contralesional visual field, difficulty preshaping the hand for grasping, and an inability to correct reaches online. Recent research in nonhuman primates (NHPs) suggests that many aspects of Balint's syndrome and optic ataxia are a result of damage to specific functional modules for reaching, saccades, grasp, attention, and state estimation. The deficits from large lesions in humans are probably composite effects from damage to combinations of these functional modules. Interactions between these modules, either within posterior parietal cortex or downstream within frontal cortex, may account for more complex behaviors such as hand-eye coordination and reach-to-grasp.",
"title": ""
},
{
"docid": "8e770bdbddbf28c1a04da0f9aad4cf16",
"text": "This paper presents a novel switch-mode power amplifier based on a multicell multilevel circuit topology. The total output voltage of the system is formed by series connection of several switching cells having a low dc-link voltage. Therefore, the cells can be realized using modern low-voltage high-current power MOSFET devices and the dc link can easily be buffered by rechargeable batteries or “super” capacitors to achieve very high amplifier peak output power levels (“flying-battery” concept). The cells are operated in a phase-shifted interleaved pulsewidth-modulation mode, which, in connection with the low partial voltage of each cell, reduces the filtering effort at the output of the total amplifier to a large extent and, consequently, improves the dynamic system behavior. The paper describes the operating principle of the system, analyzes the fundamental relationships being relevant for the circuit design, and gives guidelines for the dimensioning of the control circuit. Furthermore, simulation results as well as results of measurements taken from a laboratory setup are presented.",
"title": ""
},
{
"docid": "b8a5d42e3ca09ac236414cd0081f5d48",
"text": "Convolution Neural Networks on Graphs are important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications, they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is hard to be justified chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batch of arbitrarily shaped data together with their evolving graph Laplacians trained in supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both the acceleration of parameter fitting and the significantly improved prediction accuracy on multiple graph-structured datasets.",
"title": ""
}
] | scidocsrr |
f9657119e4fdea6594c89addb1fd6be3 | On the wafer/pad friction of chemical-mechanical planarization (CMP) processes - Part I: modeling and analysis | [
{
"docid": "d1bd5406b31cec137860a73b203d6bef",
"text": "A chemical-mechanical planarization (CMP) model based on lubrication theory is developed which accounts for pad compressibility, pad porosity and means of slurry delivery. Slurry ®lm thickness and velocity distributions between the pad and the wafer are predicted using the model. Two regimes of CMP operation are described: the lubrication regime (for ,40±70 mm slurry ®lm thickness) and the contact regime (for thinner ®lms). These regimes are identi®ed for two different pads using experimental copper CMP data and the predictions of the model. The removal rate correlation based on lubrication and mass transport theory agrees well with our experimental data in the lubrication regime. q 2000 Elsevier Science S.A. All rights reserved.",
"title": ""
},
{
"docid": "e03795645ca53f6d4f903ff8ff227054",
"text": "This paper presents the experimental validation and some application examples of the proposed wafer/pad friction models for linear chemical-mechanical planarization (CMP) processes in the companion paper. An experimental setup of a linear CMP polisher is first presented and some polishing processes are then designed for validation of the wafer/pad friction modeling and analysis. The friction torques of both the polisher spindle and roller systems are used to monitor variations of the friction coefficient in situ . Verification of the friction model under various process parameters is presented. Effects of pad conditioning and the wafer film topography on wafer/pad friction are experimentally demonstrated. Finally, several application examples are presented showing the use of the roller motor current measurement for real-time process monitoring and control.",
"title": ""
}
] | [
{
"docid": "c5c64d7fcd9b4804f7533978026dcfbd",
"text": "This paper presents a new method to control multiple micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. We use the fact that all magnetic agents orient to the global input magnetic field to modulate the local attraction-repulsion forces between nearby agents. Here we study these controlled interaction magnetic forces for agents at a water-air interface and devise two controllers to regulate the inter-agent spacing and heading of the set, for motion in two dimensions. Simulation and experimental demonstrations show the feasibility of the idea and its potential for the completion of complex tasks using teams of microrobots. Average tracking error of less than 73 μm and 14° is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical disk-shape agents with nominal radius of 500 μm and thickness of 80 μm operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "5dfda76bf2065850492406fdf7cfed81",
"text": "We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both capabilities into a common probabilistic framework. This model can be thought of as a non-parametric approach which can easily handle configurations of large numbers of object parts. In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with an MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show that the proposed method outperforms previously published methods while needing one order of magnitude less training examples. Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns, even under significant partial occlusion.",
"title": ""
},
{
"docid": "2282c06ea5e203b7e94095334bba05b9",
"text": "Exploring and surveying the world has been an important goal of humankind for thousands of years. Entering the 21st century, the Earth has almost been fully digitally mapped. Widespread deployment of GIS (Geographic Information Systems) technology and a tremendous increase of both satellite and street-level mapping over the last decade enables the public to view large portions of the world using computer applications such as Bing Maps or Google Earth.",
"title": ""
},
{
"docid": "61a2b0e51b27f46124a8042d59c0f022",
"text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.",
"title": ""
},
{
"docid": "16fbebf500be1bf69027d3a35d85362b",
"text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.",
"title": ""
},
{
"docid": "e3524dfc6939238e9e2f49440c1090ea",
"text": "This paper presents modeling approaches for step-up grid-connected photovoltaic systems intended to provide analytical tools for control design. The first approach is based on a voltage source representation of the bulk capacitor interacting with the grid-connected inverter, which is a common model for large DC buses and closed-loop inverters. The second approach considers the inverter of a double-stage PV system as a Norton equivalent, which is widely accepted for open-loop inverters. In addition, the paper considers both ideal and realistic models for the DC/DC converter that interacts with the PV module, providing four mathematical models to cover a wide range of applications. The models are expressed in state space representation to simplify its use in analysis and control design, and also to be easily implemented in simulation software, e.g., Matlab. The PV system was analyzed to demonstrate the non-minimum phase condition for all the models, which is an important aspect to select the control technique. Moreover, the system observability and controllability were studied to define design criteria. Finally, the analytical results are illustrated by means of detailed simulations, and the paper results are validated in an experimental test bench.",
"title": ""
},
{
"docid": "33c453cec25a77e1bde4ecb353fc678b",
"text": "This article introduces the functional model of self-disclosure on social network sites by integrating a functional theory of self-disclosure and research on audience representations as situational cues for activating interpersonal goals. According to this model, people pursue strategic goals and disclose differently depending on social media affordances, and self-disclosure goals mediate between media affordances and disclosure intimacy. The results of the empirical study examining self-disclosure motivations and characteristics in Facebook status updates, wall posts, and private messaging lend support to this model and provide insights into the motivational drivers of self-disclosure on SNSs, helping to reconcile traditional views on self-disclosure and self-disclosing behaviors in new media contexts.",
"title": ""
},
{
"docid": "b113d45660629847afbd7faade1f3a71",
"text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) is presented. An Archimedean spiral slot is used to excite the rectangular DRA for wideband CP radiation. The operating principle of the proposed antenna is based on using a broadband feeding structure to excite the DRA. A prototype of the proposed antenna is designed, fabricated, and measured. Good agreement between the simulated and measured results is attained, and a wide 3-dB axial-ratio (AR) bandwidth of 25.5% is achieved.",
"title": ""
},
{
"docid": "024e95f41a48e8409bd029c14e6acb3a",
"text": "This communication investigates the application of metamaterial absorber (MA) to waveguide slot antenna to reduce its radar cross section (RCS). A novel ultra-thin MA is presented, and its absorbing characteristics and mechanism are analyzed. The PEC ground plane of waveguide slot antenna is covered by this MA. As compared with the slot antenna with a PEC ground plane, the simulation and experiment results demonstrate that the monostatic and bistatic RCS of waveguide slot antenna are reduced significantly, and the performance of antenna is preserved simultaneously.",
"title": ""
},
{
"docid": "d9a9339672121fb6c3baeb51f11bfcd8",
"text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, ®ltering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A clientserver-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of dierent bandwidths. Additionally, VISION classi®es the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual pro®les. # 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8466bed483a2774f7ccb44416364cf3f",
"text": "This paper proposes a semantics for incorporation that does not require the incorporated nominal to form a syntactic or morphological unit with the verb. Such a semantics is needed for languages like Hindi where semantic intuitions suggest the existence of incorporation but the evidence for syntactic fusion is not compelling. A lexical alternation between regular transitive and incorporating transitive verbs is proposed to derive the particular features of Hindi incorporation. The proposed semantics derives existential force without positing existential closure over the incorporated nominal. It also builds in modality into the meaning of the incorporating verb. This proposal is compared to two other recent proposals for the interpretation of incorporated arguments. The cross-linguistic implications of the analysis developed on the basis of Hindi are also discussed. 1. Identifying Incorporation The primary identification of the phenomenon known as noun incorporation is based on morphological and syntactic evidence about the shape and position of the nominal element involved. Consider the Inuit example in (1a) as well as the more familiar example of English compounding in (1b): 1a. Angunguaq eqalut-tur-p-u-q West Greenlandic -Inuit A-ABS salmon-eat-IND-[-tr]-3S Van Geenhoven (1998) “Angunguaq ate salmon.” b. Mary went apple-picking. The thematic object in (1a) occurs inside the verbal complex, and this affects transitivity. The verb has intransitive marking and the subject has absolutive case instead of the expected ergative. The nominal itself is a bare stem. There is no determiner, case marking, plurality or modification. In other words, an incorporated nominal is an N, not a DP or an NP. Similar comments apply to the English compound in (1b), though it should be noted that English does not have [V N+V] compounds. Though the reasons for this are not particularly well-understood at this time, my purpose in introducing English compounds here is for expository purposes only. A somewhat less obvious case of noun incorporation is attested in Niuean, discussed by Massam (2001). Niuean is an SVO language with obligatory V fronting. Massam notes that in addition to expect VSO order, there also exist sentences with VOS order in Niuean: 1 There can be external modifiers with (a limited set of) determiners, case marking etc. in what is known as the phenomenon of ‘doubling’.",
"title": ""
},
{
"docid": "ad004dd47449b977cd30f2454c5af77a",
"text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.",
"title": ""
},
{
"docid": "f21e55c7509124be8fabfb1d706d76aa",
"text": "CTCF and BORIS (CTCFL), two paralogous mammalian proteins sharing nearly identical DNA binding domains, are thought to function in a mutually exclusive manner in DNA binding and transcriptional regulation. Here we show that these two proteins co-occupy a specific subset of regulatory elements consisting of clustered CTCF binding motifs (termed 2xCTSes). BORIS occupancy at 2xCTSes is largely invariant in BORIS-positive cancer cells, with the genomic pattern recapitulating the germline-specific BORIS binding to chromatin. In contrast to the single-motif CTCF target sites (1xCTSes), the 2xCTS elements are preferentially found at active promoters and enhancers, both in cancer and germ cells. 2xCTSes are also enriched in genomic regions that escape histone to protamine replacement in human and mouse sperm. Depletion of the BORIS gene leads to altered transcription of a large number of genes and the differentiation of K562 cells, while the ectopic expression of this CTCF paralog leads to specific changes in transcription in MCF7 cells. We discover two functionally and structurally different classes of CTCF binding regions, 2xCTSes and 1xCTSes, revealed by their predisposition to bind BORIS. We propose that 2xCTSes play key roles in the transcriptional program of cancer and germ cells.",
"title": ""
},
{
"docid": "6e3e881cb1bb05101ad0f38e3f21e547",
"text": "Mechanical valves used for aortic valve replacement (AVR) continue to be associated with bleeding risks because of anticoagulation therapy, while bioprosthetic valves are at risk of structural valve deterioration requiring reoperation. This risk/benefit ratio of mechanical and bioprosthetic valves has led American and European guidelines on valvular heart disease to be consistent in recommending the use of mechanical prostheses in patients younger than 60 years of age. Despite these recommendations, the use of bioprosthetic valves has significantly increased over the last decades in all age groups. A systematic review of manuscripts applying propensity-matching or multivariable analysis to compare the usage of mechanical vs. bioprosthetic valves found either similar outcomes between the two types of valves or favourable outcomes with mechanical prostheses, particularly in younger patients. The risk/benefit ratio and choice of valves will be impacted by developments in valve designs, anticoagulation therapy, reducing the required international normalized ratio, and transcatheter and minimally invasive procedures. However, there is currently no evidence to support lowering the age threshold for implanting a bioprosthesis. Physicians in the Heart Team and patients should be cautious in pursuing more bioprosthetic valve use until its benefit is clearly proven in middle-aged patients.",
"title": ""
},
{
"docid": "5a4b73a1357809a547773fa8982172dd",
"text": "In this paper, we present a method for cup boundary detection from monocular colour fundus image to help quantify cup changes. The method is based on anatomical evidence such as vessel bends at cup boundary, considered relevant by glaucoma experts. Vessels are modeled and detected in a curvature space to better handle inter-image variations. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A reliable subset called r-bends is derived using a multi-stage strategy and a local splinetting is used to obtain the desired cup boundary. The method has been successfully tested on 133 images comprising 32 normal and 101 glaucomatous images against three glaucoma experts. The proposed method shows high sensitivity in cup to disk ratio-based glaucoma detection and local assessment of the detected cup boundary shows good consensus with the expert markings.",
"title": ""
},
{
"docid": "f3f70e5ba87399e9d44bda293a231399",
"text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
{
"docid": "0ce0db75982c205b581bc24060b9e2a4",
"text": "Maxim Gumin's WaveFunctionCollapse (WFC) algorithm is an example-driven image generation algorithm emerging from the craft practice of procedural content generation. In WFC, new images are generated in the style of given examples by ensuring every local window of the output occurs somewhere in the input. Operationally, WFC implements a non-backtracking, greedy search method. This paper examines WFC as an instance of constraint solving methods. We trace WFC's explosive influence on the technical artist community, explain its operation in terms of ideas from the constraint solving literature, and probe its strengths by means of a surrogate implementation using answer set programming.",
"title": ""
},
{
"docid": "16ee3eb990a49bdff840609ae79f26e3",
"text": "Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.",
"title": ""
},
{
"docid": "2a717b823caaaa0187d25b04305f13ee",
"text": "BACKGROUND\nDo peripersonal space for acting on objects and interpersonal space for interacting with con-specifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance.\n\n\nMETHODOLOGY\nParticipants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active).\n\n\nPRINCIPAL FINDINGS\nComfort-distance was larger than other conditions when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs males, expansion with cylinder vs robot) and the gender of participants.\n\n\nCONCLUSIONS\nThese findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, at different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space.",
"title": ""
},
{
"docid": "3a501184ca52dedde44e79d2c66e78df",
"text": "China’s New Silk Road initiative is a multistate commercial project as grandiose as it is ambitious. Comprised of an overland economic “belt” and a maritime transit component, it envisages the development of a trade network traversing numerous countries and continents. Major investments in infrastructure are to establish new commercial hubs along the route, linking regions together via railroads, ports, energy transit systems, and technology. A relatively novel concept introduced by China’s President Xi Jinping in 2013, several projects related to the New Silk Road initiative—also called “One Belt, One Road” (OBOR, or B&R)—are being planned, are under construction, or have been recently completed. The New Silk Road is a fluid concept in its formative stages: it encompasses a variety of projects and is all-inclusive in terms of countries welcomed to participate. For these reasons, it has been labeled an abstract or visionary project. However, those in the region can attest that the New Silk Road is a reality, backed by Chinese hard currency. Thus, while Washington continues to deliberate on an overarching policy toward Asia, Beijing is making inroads—literally and figuratively— across the region and beyond.",
"title": ""
}
] | scidocsrr |
6ab795137afdadc35ae3244170ed0ea7 | Accurate Monocular Visual-inertial SLAM using a Map-assisted EKF Approach | [
{
"docid": "2bbbd2d1accca21cdb614a0324aa1a0d",
"text": "We propose a novel direct visual-inertial odometry method for stereo cameras. Camera pose, velocity and IMU biases are simultaneously estimated by minimizing a combined photometric and inertial energy functional. This allows us to exploit the complementary nature of vision and inertial data. At the same time, and in contrast to all existing visual-inertial methods, our approach is fully direct: geometry is estimated in the form of semi-dense depth maps instead of manually designed sparse keypoints. Depth information is obtained both from static stereo - relating the fixed-baseline images of the stereo camera - and temporal stereo - relating images from the same camera, taken at different points in time. We show that our method outperforms not only vision-only or loosely coupled approaches, but also can achieve more accurate results than state-of-the-art keypoint-based methods on different datasets, including rapid motion and significant illumination changes. In addition, our method provides high-fidelity semi-dense, metric reconstructions of the environment, and runs in real-time on a CPU.",
"title": ""
},
{
"docid": "c5cc4da2906670c30fc0bac3040217bd",
"text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"title": ""
}
] | [
{
"docid": "a4d89f698e3049adc70bcd51b26878cc",
"text": "The design and measured results of a 2 times 2 microstrip line fed U-slot rectangular antenna array are presented. The U-slot patches and the feeding network are placed on the same layer, resulting in a very simple structure. The advantage of the microstrip line fed U-slot patch is that it is easy to form the array. An impedance bandwidth (VSWR < 2) of 18% ranging from 5.65 GHz to 6.78 GHz is achieved. The radiation performance including radiation pattern, cross polarization, and gain is also satisfactory within this bandwidth. The measured peak gain of the array is 11.5 dBi. The agreement between simulated results and the measurement ones is good. The 2 times 2 array may be used as a module to form larger array.",
"title": ""
},
{
"docid": "49f30c4bb559ecba9ec5ad148a974bb1",
"text": "The proliferation of distributed energy resources has prompted interest in the expansion of DC power systems. One critical technological limitation that hinders this expansion is the absence of high step-down and high step-up DC converters for interconnecting DC systems. This work attempts to address the latter of these limitations. This paper presents a new transformerless high boost DC-DC converter intended for use as an interconnect between DC systems. With a conversion ratio of 1:10, the converter offers significantly higher boost ratio than the conventional non-isolated boost converter. It is designed to operate at medium to high voltage (>; 1kV), and provides high voltage dc/dc gain (>;5). Based on a current fed resonant topology, the design is well matched to available IGBT switch technology that enables use of relatively high switching frequencies yet accommodates the IGBTs inability to provide reverse blocking functionality. An advanced steady state model suitable for analysis of this converter is presented together with an experimental evaluation of the converter.",
"title": ""
},
{
"docid": "6b9d5cbdf91d792d60621da0bb45a303",
"text": "AR systems pose potential security concerns that should be addressed before the systems become widespread.",
"title": ""
},
{
"docid": "08a72844e8a974505b28527ee2fa3ee0",
"text": "Perfidy is the impersonation of civilians during armed conflict. It is generally outlawed by the laws of war such as the Geneva Conventions as its practice makes wars more dangerous for civilians. Cyber perfidy can be defined as malicious software or hardware masquerading as ordinary civilian software or hardware. We argue that it is also banned by the laws of war in cases where such cyber infrastructure is essential to normal civilian activity. This includes tampering with critical parts of operating systems and security software. We discuss possible targets of cyber perfidy, possible objections to the notion, and possible steps towards international agreements about it. This paper appeared in the Routledge Handbook of War and Ethics as chapter 29, ed. N. Evans, 2013.",
"title": ""
},
{
"docid": "f42176869d6946c4bc8f6ce8d713406d",
"text": "Design of self-adaptive software-intensive Cyber-Physical Systems (siCPS) operating in dynamic environments is a significant challenge when a sufficient level of dependability is required. This stems partly from the fact that the concerns of selfadaptivity and dependability are to an extent contradictory. In this paper, we introduce IRM-SA (Invariant Refinement Method for Self-Adaptation) – a design method and associated formally grounded model targeting siCPS – that addresses self-adaptivity and supports dependability by providing traceability between system requirements, distinct situations in the environment, and predefined configurations of system architecture. Additionally, IRM-SA allows for architecture self-adaptation at runtime and integrates the mechanism of predictive monitoring that deals with operational uncertainty. As a proof of concept, it was implemented in DEECo, a component framework that is based on dynamic ensembles of components. Furthermore, its feasibility was evaluated in experimental settings assuming decentralized system operation.",
"title": ""
},
{
"docid": "a54ac6991dce07d51ac028b8a249219e",
"text": "Rearrangement of immunoglobulin heavy-chain variable (VH) gene segments has been suggested to be regulated by interleukin 7 signaling in pro–B cells. However, the genetic evidence for this recombination pathway has been challenged. Furthermore, no molecular components that directly control VH gene rearrangement have been elucidated. Using mice deficient in the interleukin 7–activated transcription factor STAT5, we demonstrate here that STAT5 regulated germline transcription, histone acetylation and DNA recombination of distal VH gene segments. STAT5 associated with VH gene segments in vivo and was recruited as a coactivator with the transcription factor Oct-1. STAT5 did not affect the nuclear repositioning or compaction of the immunoglobulin heavy-chain locus. Therefore, STAT5 functions at a distinct step in regulating distal VH recombination in relation to the transcription factor Pax5 and histone methyltransferase Ezh2.",
"title": ""
},
{
"docid": "9734246f37e4e1361028a86eecdefec3",
"text": "Company disclosures greatly aid in the process of financial decision-making; therefore, they are consulted by financial investors and automated traders before exercising ownership in stocks. While humans are usually able to correctly interpret the content, the same is rarely true of computerized decision support systems, which struggle with the complexity and ambiguity of natural language. A possible remedy is represented by deep learning, which overcomes several shortcomings of traditional methods of text mining. For instance, recurrent neural networks, such as long shortterm memories, employ hierarchical structures, together with a large number of hidden layers, to automatically extract features from ordered sequences of words and capture highly non-linear relationships such as context-dependent meanings. However, deep learning has only recently started to receive traction, possibly because its performance is largely untested. Hence, this paper studies the use of deep neural networks for financial decision support. We additionally experiment with transfer learning, in which we pre-train the network on a different corpus with a length of 139.1 million words. Our results reveal a higher directional accuracy as compared to traditional machine learning when predicting stock price movements in response ∗Corresponding author. Mail: [email protected]; Tel: +49 761 203 2395; Fax: +49 761 203 2416. Email addresses: [email protected] (Mathias Kraus), [email protected] (Stefan Feuerriegel) Preprint submitted to Decision Support Systems July 6, 2018 ar X iv :1 71 0. 03 95 4v 1 [ cs .C L ] 1 1 O ct 2 01 7 to financial disclosures. Our work thereby helps to highlight the business value of deep learning and provides recommendations to practitioners and executives.",
"title": ""
},
{
"docid": "065ca3deb8cb266f741feb67e404acb5",
"text": "Recent research on deep convolutional neural networks (CNNs) has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple CNN architectures that achieve that accuracy level. With equivalent accuracy, smaller CNN architectures offer at least three advantages: (1) Smaller CNNs require less communication across servers during distributed training. (2) Smaller CNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller CNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small CNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques, we are able to compress SqueezeNet to less than 0.5MB (510× smaller than AlexNet). The SqueezeNet architecture is available for download here: https://github.com/DeepScale/SqueezeNet",
"title": ""
},
{
"docid": "775e78af608c07853af2e2c31a59bf5c",
"text": "This investigation compared the effect of high-volume (VOL) versus high-intensity (INT) resistance training on stimulating changes in muscle size and strength in resistance-trained men. Following a 2-week preparatory phase, participants were randomly assigned to either a high-volume (VOL; n = 14, 4 × 10-12 repetitions with ~70% of one repetition maximum [1RM], 1-min rest intervals) or a high-intensity (INT; n = 15, 4 × 3-5 repetitions with ~90% of 1RM, 3-min rest intervals) training group for 8 weeks. Pre- and posttraining assessments included lean tissue mass via dual energy x-ray absorptiometry, muscle cross-sectional area and thickness of the vastus lateralis (VL), rectus femoris (RF), pectoralis major, and triceps brachii muscles via ultrasound images, and 1RM strength in the back squat and bench press (BP) exercises. Blood samples were collected at baseline, immediately post, 30 min post, and 60 min postexercise at week 3 (WK3) and week 10 (WK10) to assess the serum testosterone, growth hormone (GH), insulin-like growth factor-1 (IGF1), cortisol, and insulin concentrations. Compared to VOL, greater improvements (P < 0.05) in lean arm mass (5.2 ± 2.9% vs. 2.2 ± 5.6%) and 1RM BP (14.8 ± 9.7% vs. 6.9 ± 9.0%) were observed for INT. Compared to INT, area under the curve analysis revealed greater (P < 0.05) GH and cortisol responses for VOL at WK3 and cortisol only at WK10. Compared to WK3, the GH and cortisol responses were attenuated (P < 0.05) for VOL at WK10, while the IGF1 response was reduced (P < 0.05) for INT. It appears that high-intensity resistance training stimulates greater improvements in some measures of strength and hypertrophy in resistance-trained men during a short-term training period.",
"title": ""
},
{
"docid": "65eb911b8cb0db4efd8f6d7c5370fd53",
"text": "This paper overviews the International Standards Organization Linguistic Annotation Framework (ISO LAF) developed in ISO TC37 SC4. We describe the XML serialization of ISO LAF, the Graph Annotation Format (GrAF) and discuss the rationale behind the various decisions that were made in determining the standard. We describe the structure of the GrAF headers in detail and provide multiple examples of GrAF representation for text and multi-media. Finally, we discuss the next steps for standardization of interchange formats for linguistic annotations.",
"title": ""
},
{
"docid": "3467f4be08c4b8d6cd556f04f324ce67",
"text": "Round robin arbiter (RRA) is a critical block in nowadays designs. It is widely found in System-on-chips and Network-on-chips. The need of an efficient RRA has increased extensively as it is a limiting performance block. In this paper, we deliver a comparative review between different RRA architectures found in literature. We also propose a novel efficient RRA architecture. The FPGA implementation results of the previous RRA architectures and our proposed one are given, that show the improvements of the proposed RRA.",
"title": ""
},
{
"docid": "6fc290610e99d66248c6d9e8c4fa4f02",
"text": "Ali, M. A. 2014. Understanding Cancer Mutations by Genome Editing. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Medicine 1054. 37 pp. Uppsala: Acta Universitatis Upsaliensis. ISBN 978-91-554-9106-2. Mutational analyses of cancer genomes have identified novel candidate cancer genes with hitherto unknown function in cancer. To enable phenotyping of mutations in such genes, we have developed a scalable technology for gene knock-in and knock-out in human somatic cells based on recombination-mediated construct generation and a computational tool to design gene targeting constructs. Using this technology, we have generated somatic cell knock-outs of the putative cancer genes ZBED6 and DIP2C in human colorectal cancer cells. In ZBED6 cells complete loss of functional ZBED6 was validated and loss of ZBED6 induced the expression of IGF2. Whole transcriptome and ChIP-seq analyses revealed relative enrichment of ZBED6 binding sites at upregulated genes as compared to downregulated genes. The functional annotation of differentially expressed genes revealed enrichment of genes related to cell cycle and cell proliferation and the transcriptional modulator ZBED6 affected the cell growth and cell cycle of human colorectal cancer cells. In DIP2Ccells, transcriptome sequencing revealed 780 differentially expressed genes as compared to their parental cells including the tumour suppressor gene CDKN2A. The DIP2C regulated genes belonged to several cancer related processes such as angiogenesis, cell structure and motility. The DIP2Ccells were enlarged and grew slower than their parental cells. To be able to directly compare the phenotypes of mutant KRAS and BRAF in colorectal cancers, we have introduced a KRAS allele in RKO BRAF cells. The expression of the mutant KRAS allele was confirmed and anchorage independent growth was restored in KRAS cells. The differentially expressed genes both in BRAF and KRAS mutant cells included ERBB, TGFB and histone modification pathways. Together, the isogenic model systems presented here can provide insights to known and novel cancer pathways and can be used for drug discovery.",
"title": ""
},
{
"docid": "043b51b50f17840508b0dfb92c895fc9",
"text": "Over the years, several security measures have been employed to combat the menace of insecurity of lives and property. This is done by preventing unauthorized entrance into buildings through entrance doors using conventional and electronic locks, discrete access code, and biometric methods such as the finger prints, thumb prints, the iris and facial recognition. In this paper, a prototyped door security system is designed to allow a privileged user to access a secure keyless door where valid smart card authentication guarantees an entry. The model consists of hardware module and software which provides a functionality to allow the door to be controlled through the authentication of smart card by the microcontroller unit. (",
"title": ""
},
{
"docid": "cb23b0837ad7e8eb4638d281dac1c175",
"text": "This study is conducted with the collaboration of the Malaysian Atomic Energy Licensing Board (AELB) in order to establish dose reference level (DRL) for computed tomography (CT) examinations in Malaysia. 426 examinations for standard adult patients and 26 examinations for paediatric patients comprising different types of CT examinations were collected from 33 out of 109 (30.3%) hospitals that have CT scanner in Malaysia. Measurements of Computed Tomography Dose Index in air (CTDIair) were done at every CT scanner in the hospitals that were involved in this study to investigate the scanner-specific values comparable to the data published by the ImPACT. Effective doses for all CT examinations were calculated using ImPACT Dosimetry Calculator for both ImPACT CTDIair and measured CTDIair values as a comparison. This study found that 4% to 22% of deviations between both values and the deviations represent the dose influence factors contributed by the CT machines. Every protocol used at certain CT examinations were analysed and it was found that tube potential (kVp) was not the main contribution for effective doses deviation. Other scanning parameters such as tube current – time product (mAs), scan length and nonstandardisation in some of the procedures were significant contributors to the effective dose deviations in most of the CT examinations. Effective doses calculated using ImPACT CTDIair were used to compare with other studies to provide an overview of CT practice in Malaysia. Effective doses for examinations of routine head, routine chest and pelvis are within the same range with studies conducted for the European guidelines, the UK and Taiwan. For the routine abdomen examination, the effective dose is still within the range compared to the studies for European guidelines and Taiwan, but 55.1% higher than the value from the study conducted in the UK. Lastly, this study also provided the third quartile values of effective doses for every CT",
"title": ""
},
{
"docid": "de73e8e382dddfba867068f1099b86fb",
"text": "Endophytes are fungi which infect plants without causing symptoms. Fungi belonging to this group are ubiquitous, and plant species not associated to fungal endophytes are not known. In addition, there is a large biological diversity among endophytes, and it is not rare for some plant species to be hosts of more than one hundred different endophytic species. Different mechanisms of transmission, as well as symbiotic lifestyles occur among endophytic species. Latent pathogens seem to represent a relatively small proportion of endophytic assemblages, also composed by latent saprophytes and mutualistic species. Some endophytes are generalists, being able to infect a wide range of hosts, while others are specialists, limited to one or a few hosts. Endophytes are gaining attention as a subject for research and applications in Plant Pathology. This is because in some cases plants associated to endophytes have shown increased resistance to plant pathogens, particularly fungi and nematodes. Several possible mechanisms by which endophytes may interact with pathogens are discussed in this review. Additional key words: biocontrol, biodiversity, symbiosis.",
"title": ""
},
{
"docid": "ac59e4ad40892da3d11d18eb40c09da8",
"text": "Recent advances in consumer-grade depth sensors have enable the collection of massive real-world 3D objects. Together with the rise of deep learning, it brings great potential for large-scale 3D object retrieval. In this challenge, we aim to study and evaluate the performance of 3D object retrieval algorithms with RGB-D data. To support the study, we expanded the previous ObjectNN dataset [HTT∗17] to include RGB-D objects from both SceneNN [HPN∗16] and ScanNet [DCS∗17], with the CAD models from ShapeNetSem [CFG∗15]. Evaluation results show that while the RGB-D to CAD retrieval problem is indeed challenging due to incomplete RGB-D reconstructions, it can be addressed to a certain extent using deep learning techniques trained on multi-view 2D images or 3D point clouds. The best method in this track has a 82% retrieval accuracy.",
"title": ""
},
{
"docid": "c3d5fa460c215fe85474252629a8dfae",
"text": "Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives.",
"title": ""
},
{
"docid": "2eac07edabf06cc2d6ab6c93aa9a40ba",
"text": "We introduce here a new dual-layer multibeam antenna with a folded Rotman lens used as a compact beam forming network in SIW technology. The objective is to reduce the overall size of the antenna system by folding the Rotman lens on two layers along the array port contour and using a transition based on an exotic reflector and several coupling vias holes. To validate the proposed concepts, an antenna system has been designed at 24.15 GHz. The radiating structure is a SIW slotted waveguide array made of fifteen resonant waveguides. The simulated results show very good scanning performances over ±47°. It is also demonstrated that the proposed transition can lead to a size reduction of about 50% for the lens, and more than 33% for the overall size of the antenna.",
"title": ""
},
{
"docid": "c75836bf10114bd568745dfaba611be0",
"text": "The present paper continues our investigations in the field of Supercapacitors or Electrochemical Double Layer Capacitors, briefly named EDLCs. The series connection of EDLCs is usual in order to obtain higher voltage levels. The inherent uneven state of charge (SOC) and manufacturing dispersions determine during charging at constant current that one of the capacitors reaches first the rated voltage levels and could, by further charging, be damaged. The balancing circuit with resistors and transistors used to bypass the charging current can be improved using the proposed circuit. We present here a complex variant, based on integrated circuit acting similar to a microcontroller. The circuit is adapted from the circuits investigated in the last 7–8 years for the batteries, especially for Lithium-ion type. The test board built around the circuit is performant, energy efficient and can be further improved to ensure the balancing control for larger capacitances.",
"title": ""
},
{
"docid": "46e81dc6b3b32f61471b91f71672a80f",
"text": "The sparsity of images in a fixed analytic transform domain or dictionary such as DCT or Wavelets has been exploited in many applications in image processing including image compression. Recently, synthesis sparsifying dictionaries that are directly adapted to the data have become popular in image processing. However, the idea of learning sparsifying transforms has received only little attention. We propose a novel problem formulation for learning doubly sparse transforms for signals or image patches. These transforms are a product of a fixed, fast analytic transform such as the DCT, and an adaptive matrix constrained to be sparse. Such transforms can be learnt, stored, and implemented efficiently. We show the superior promise of our approach as compared to analytical sparsifying transforms such as DCT for image representation.",
"title": ""
}
] | scidocsrr |
86d43e0b4ae9c634e85aeec789baad8c | A Brief Review of Network Embedding | [
{
"docid": "9fe198a6184a549ff63364e9782593d8",
"text": "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors/stocks and users/businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
}
] | [
{
"docid": "5b3ca1cc607d2e8f0394371f30d9e83a",
"text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.",
"title": ""
},
{
"docid": "4a572df21f3a8ebe3437204471a1fd10",
"text": "Whilst studies on emotion recognition show that genderdependent analysis can improve emotion classification performance, the potential differences in the manifestation of depression between male and female speech have yet to be fully explored. This paper presents a qualitative analysis of phonetically aligned acoustic features to highlight differences in the manifestation of depression. Gender-dependent analysis with phonetically aligned gender-dependent features are used for speech-based depression recognition. The presented experimental study reveals gender differences in the effect of depression on vowel-level features. Considering the experimental study, we also show that a small set of knowledge-driven gender-dependent vowel-level features can outperform state-of-the-art turn-level acoustic features when performing a binary depressed speech recognition task. A combination of these preselected gender-dependent vowel-level features with turn-level standardised openSMILE features results in additional improvement for depression recognition.",
"title": ""
},
{
"docid": "b9aaab241bab9c11ac38d6e9188b7680",
"text": "Find loads of the research methods in the social sciences book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.",
"title": ""
},
{
"docid": "bc06b540765ddf762dc8cb72cae7ad41",
"text": "We present a method to produce free, enormous corpora to train taggers for Named Entity Recognition (NER), the task of identifying and classifying names in text, often solved by statistical learning systems. Our approach utilises the text of Wikipedia, a free online encyclopedia, transforming links between Wikipedia articles into entity annotations. Having derived a baseline corpus, we found that altering Wikipedia’s links and identifying classes of capitalised non-entity terms would enable the corpus to conform more closely to gold-standard annotations, increasing performance by up to 32% F score. The evaluation of our method is novel since the training corpus is not usually a variable in NER experimentation. We therefore develop a number of methods for analysing and comparing training corpora. Gold-standard training corpora for NER perform poorly (F score up to 32% lower) when evaluated on test data from a different gold-standard corpus. Our Wikipedia-derived data can outperform manually-annotated corpora on this cross-corpus evaluation task by up to 7% on held-out test data. These experimental results show that Wikipedia is viable as a source of automatically-annotated training corpora, which have wide domain coverage applicable to a broad range of NLP applications.",
"title": ""
},
{
"docid": "157f8adc236a9d2079ea424c5cf40dcb",
"text": "As humans we are a highly social species: in order to coordinate our joint actions and assure successful communication, we use language skills to explicitly convey information to each other, and social abilities such as empathy or perspective taking to infer another person's emotions and mental state. The human cognitive capacity to draw inferences about other peoples' beliefs, intentions and thoughts has been termed mentalizing, theory of mind or cognitive perspective taking. This capacity makes it possible, for instance, to understand that people may have views that differ from our own. Conversely, the capacity to share the feelings of others is called empathy. Empathy makes it possible to resonate with others' positive and negative feelings alike--we can thus feel happy when we vicariously share the joy of others and we can share the experience of suffering when we empathize with someone in pain. Importantly, in empathy one feels with someone, but one does not confuse oneself with the other; that is, one still knows that the emotion one resonates with is the emotion of another. If this self-other distinction is not present, we speak of emotion contagion, a precursor of empathy that is already present in babies.",
"title": ""
},
{
"docid": "2b40c6f6a9fc488524c23e11cd57a00b",
"text": "An overview of the basics of metaphorical thought and language from the perspective of Neurocognition, the integrated interdisciplinary study of how conceptual thought and language work in the brain. The paper outlines a theory of metaphor circuitry and discusses how everyday reason makes use of embodied metaphor circuitry.",
"title": ""
},
{
"docid": "9441113599194d172b6f618058b2ba88",
"text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.",
"title": ""
},
{
"docid": "022a63e994a74d3d0e7b04680c1cb77e",
"text": "Practitioners in Europe and the U.S. recently have proposed two distinct approaches to address what they believe are shortcomings of traditional budgeting practices. One approach advocates improving the budgeting process and primarily focuses on the planning problems with budgeting. The other advocates abandoning the budget and primarily focuses on the performance evaluation problems with budgeting. This paper provides an overview and research perspective on these two recent developments. We discuss why practitioners have become dissatisfied with budgets, describe the two distinct approaches, place them in a research context, suggest insights that may aid the practitioners, and use the practitioner perspectives to identify fruitful areas for research. INTRODUCTION Budgeting is the cornerstone of the management control process in nearly all organizations, but despite its widespread use, it is far from perfect. Practitioners express concerns about using budgets for planning and performance evaluation. The practitioners argue that budgets impede the allocation of organizational resources to their best uses and encourage myopic decision making and other dysfunctional budget games. They attribute these problems, in part, to traditional budgeting’s financial, top-down, commandand-control orientation as embedded in annual budget planning and performance evaluation processes (e.g., Schmidt 1992; Bunce et al. 1995; Hope and Fraser 1997, 2000, 2003; Wallander 1999; Ekholm and Wallin 2000; Marcino 2000; Jensen 2001). We demonstrate practitioners’ concerns with budgets by describing two practice-led developments: one advocating improving the budgeting process, the other abandoning it. These developments illustrate two points. First, they show practitioners’ concerns with budgeting problems that the scholarly literature has largely ignored while focusing instead 1 For example, Comshare (2000) surveyed financial executives about their current experience with their organizations’ budgeting processes. One hundred thirty of the 154 participants (84 percent) identified 332 frustrations with their organizations’ budgeting processes, an average of 2.6 frustrations per person. We acknowledge the many helpful suggestions by the reviewers, Bjorn Jorgensen, Murray Lindsay, Ken Merchant, and Mark Young. 96 Hansen, Otley, and Van der Stede Journal of Management Accounting Research, 2003 on more traditional issues like participative budgeting. Second, the two conflicting developments illustrate that firms face a critical decision regarding budgeting: maintain it, improve it, or abandon it? Our discussion has two objectives. First, we demonstrate the level of concern with budgeting in practice, suggesting its potential for continued scholarly research. Second, we wish to raise academics’ awareness of apparent disconnects between budgeting practice and research. We identify areas where prior research may aid the practitioners and, conversely, use the practitioners’ insights to suggest areas for research. In the second section, we review some of the most common criticisms of budgets in practice. The third section describes and analyzes the main thrust of two recent practiceled developments in budgeting. In the fourth section, we place these two practice developments in a research context and suggest research that may be relevant to the practitioners. The fifth section turns the tables by using the practitioner insights to offer new perspectives for research. In the sixth section, we conclude. 
PROBLEMS WITH BUDGETING IN PRACTICE The ubiquitous use of budgetary control is largely due to its ability to weave together all the disparate threads of an organization into a comprehensive plan that serves many different purposes, particularly performance planning and ex post evaluation of actual performance vis-à-vis the plan. Despite performing this integrative function and laying the basis for performance evaluation, budgetary control has many limitations, such as its longestablished and oft-researched susceptibility to induce budget games or dysfunctional behaviors (Hofstede 1967; Onsi 1973; Merchant 1985b; Lukka 1988). A recent report by Neely et al. (2001), drawn primarily from the practitioner literature, lists the 12 most cited weaknesses of budgetary control as: 1. Budgets are time-consuming to put together; 2. Budgets constrain responsiveness and are often a barrier to change; 3. Budgets are rarely strategically focused and often contradictory; 4. Budgets add little value, especially given the time required to prepare them; 5. Budgets concentrate on cost reduction and not value creation; 6. Budgets strengthen vertical command-and-control; 7. Budgets do not reflect the emerging network structures that organizations are adopting; 8. Budgets encourage gaming and perverse behaviors; 9. Budgets are developed and updated too infrequently, usually annually; 10. Budgets are based on unsupported assumptions and guesswork; 11. Budgets reinforce departmental barriers rather than encourage knowledge sharing; and 12. Budgets make people feel undervalued. 2 For example, in their review of nearly 2,000 research and professional articles in management accounting in the 1996–2000 period, Selto and Widener (2001) document several areas of ‘‘fit’’ and ‘‘misfit’’ between practice and research. They document that more research than practice exists in the area of participative budgeting and state that ‘‘[this] topic appears to be of little current, practical interest, but continues to attract research efforts, perhaps because of the interesting theoretical issues it presents.’’ Selto and Widener (2001) also document virtually no research on activity-based budgeting (one of the practice-led developments we discuss in this paper) and planning and forecasting, although these areas have grown in practice coverage each year during the 1996– 2000 period. Practice Developments in Budgeting 97 Journal of Management Accounting Research, 2003 While not all would agree with these criticisms, other recent critiques (e.g., Schmidt 1992; Hope and Fraser 1997, 2000, 2003; Ekholm and Wallin 2000; Marcino 2000; Jensen 2001) also support the perception of widespread dissatisfaction with budgeting in practice. We synthesize the sources of dissatisfaction as follows. Claims 1, 4, 9, and 10 relate to the recurring criticism that by the time budgets are used, their assumptions are typically outdated, reducing the value of the budgeting process. A more radical version of this criticism is that conventional budgets can never be valid because they cannot capture the uncertainty involved in rapidly changing environments (Wallender 1999). In more conceptual terms, the operation of a useful budgetary control system requires two related elements. First, there must be a high degree of operational stability so that the budget provides a valid plan for a reasonable period of time (typically the next year). 
Second, managers must have good predictive models so that the budget provides a reasonable performance standard against which to hold managers accountable (Berry and Otley 1980). Where these criteria hold, budgetary control is a useful control mechanism, but for organizations that operate in more turbulent environments, it becomes less useful (Samuelson 2000). Claims 2, 3, 5, 6, and 8 relate to another common criticism that budgetary controls impose a vertical command-and-control structure, centralize decision making, stifle initiative, and focus on cost reductions rather than value creation. As such, budgetary controls often impede the pursuit of strategic goals by supporting such mechanical practices as lastyear-plus budget setting and across-the-board cuts. Moreover, the budget’s exclusive focus on annual financial performance causes a mismatch with operational and strategic decisions that emphasize nonfinancial goals and cut across the annual planning cycle, leading to budget games involving skillful timing of revenues, expenditures, and investments (Merchant 1985a). Finally, claims 7, 11, and 12 reflect organizational and people-related budgeting issues. The critics argue that vertical, command-and-control, responsibility center-focused budgetary controls are incompatible with flat, network, or value chain-based organizational designs and impede empowered employees from making the best decisions (Hope and Fraser 2003). Given such a long list of problems and many calls for improvement, it seems odd that the vast majority of U.S. firms retain a formal budgeting process (97 percent of the respondents in Umapathy [1987]). One reason that budgets may be retained in most firms is because they are so deeply ingrained in an organization’s fabric (Scapens and Roberts 1993). ‘‘They remain a centrally coordinated activity (often the only one) within the business’’ (Neely et al. 2001, 9) and constitute ‘‘the only process that covers all areas of organizational activity’’ (Otley 1999). However, a more recent survey of Finnish firms found that although 25 percent are retaining their traditional budgeting system, 61 percent are actively upgrading their system, and 14 percent are either abandoning budgets or at least considering it (Ekholm and Wallin 2000). We discuss two practice-led developments that illustrate proposals to improve budgeting or to abandon it. Although the two developments reach different conclusions, both originated in the same organization, the Consortium for Advanced Manufacturing-International (CAM-I); one in 3 We note that there are several factors that inevitably contribute to the seemingly negative evaluation of budgetary controls. First, given information asymmetries, budgets operate under second-best conditions in most organizations. Second, information is costly. Finally, unlike the costs, the benefits of budgeting are indirect, and thus, less salient. 98 Hansen, Otley, and Van der Stede Journal of Management Accounting Research, 2003 the U.S. and the other in Europe. The U",
"title": ""
},
{
"docid": "97f54d4b04e54ddae85d2e0c9a0a6476",
"text": "We propose a novel and robust hashing paradigm that uses iterative geometric techniques and relies on observations that main geometric features within an image would approximately stay invariant under small perturbations. A key goal of this algorithm is to produce sufficiently randomized outputs which are unpredictable, thereby yielding properties akin to cryptographic MACs. This is a key component for robust multimedia identification and watermarking (for synchronization as well as content dependent key generation). Our algorithm withstands standard benchmark (e.g Stirmark) attacks provided they do not cause severe perceptually significant distortions. As verified by our detailed experiments, the approach is relatively media independent and works for",
"title": ""
},
{
"docid": "1459f6bf9ebf153277f49a0791e2cf6d",
"text": "Content popularity prediction finds application in many areas, including media advertising, content caching, movie revenue estimation, traffic management and macro-economic trends forecasting, to name a few. However, predicting this popularity is difficult due to, among others, the effects of external phenomena, the influence of context such as locality and relevance to users,and the difficulty of forecasting information cascades.\n In this paper we identify patterns of temporal evolution that are generalisable to distinct types of data, and show that we can (1) accurately classify content based on the evolution of its popularity over time and (2) predict the value of the content's future popularity. We verify the generality of our method by testing it on YouTube, Digg and Vimeo data sets and find our results to outperform the K-Means baseline when classifying the behaviour of content and the linear regression baseline when predicting its popularity.",
"title": ""
},
{
"docid": "f3f70e5ba87399e9d44bda293a231399",
"text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
{
"docid": "f6c3124f3824bcc836db7eae1b926d65",
"text": "Cloud balancing provides an organization with the ability to distribute application requests across any number of application deployments located in different data centers and through Cloud-computing providers. In this paper, we propose a load balancing methodMinsd (Minimize standard deviation of Cloud load method) and apply it on three levels control: PEs (Processing Elements), Hosts and Data Centers. Simulations on CloudSim are used to check its performance and its influence on makespan, communication overhead and throughput. A true log of a cluster also is used to test our method. Results indicate that our method not only gives good Cloud balancing but also ensures reducing makespan and communication overhead and enhancing throughput of the whole the system.",
"title": ""
},
{
"docid": "92e955705aa333923bb7b14af946fc2f",
"text": "This study examines the role of online daters’ physical attractiveness in their profile selfpresentation and, in particular, their use of deception. Sixty-nine online daters identified the deceptions in their online dating profiles and had their photograph taken in the lab. Independent judges rated the online daters’ physical attractiveness. Results show that the lower online daters’ attractiveness, the more likely they were to enhance their profile photographs and lie about their physical descriptors (height, weight, age). The association between attractiveness and deception did not extend to profile elements unrelated to their physical appearance (e.g., income, occupation), suggesting that their deceptions were limited and strategic. Results are discussed in terms of (a) evolutionary theories about the importance of physical attractiveness in the dating realm and (b) the technological affordances that allow online daters to engage in selective self-presentation.",
"title": ""
},
{
"docid": "082630a33c0cc0de0e60a549fc57d8e8",
"text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long-short Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.",
"title": ""
},
{
"docid": "b4d7fccccd7a80631f1190320cfeab9e",
"text": "BACKGROUND\nPatients on surveillance for clinical stage I (CSI) testicular cancer are counseled regarding their baseline risk of relapse. The conditional risk of relapse (cRR), which provides prognostic information on patients who have survived for a period of time without relapse, have not been determined for CSI testicular cancer.\n\n\nOBJECTIVE\nTo determine cRR in CSI testicular cancer.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nWe reviewed 1239 patients with CSI testicular cancer managed with surveillance at a tertiary academic centre between 1980 and 2014. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: cRR estimates were calculated using the Kaplan-Meier method. We stratified patients according to validated risk factors for relapse. We used linear regression to determine cRR trends over time.\n\n\nRESULTS AND LIMITATIONS\nAt orchiectomy, the risk of relapse within 5 yr was 42.4%, 17.3%, 20.3%, and 12.2% among patients with high-risk nonseminomatous germ cell tumor (NSGCT), low-risk NSGCT, seminoma with tumor size ≥3cm, and seminoma with tumor size <3cm, respectively. However, for patients without relapse within the first 2 yr of follow-up, the corresponding risk of relapse within the next 5 yr in the groups was 0.0%, 1.0% (95% confidence interval [CI] 0.3-1.7%), 5.6% (95% CI 3.1-8.2%), and 3.9% (95% CI 1.4-6.4%). Over time, cRR decreased (p≤0.021) in all models. Limitations include changes to surveillance protocols over time and few late relapses.\n\n\nCONCLUSIONS\nAfter 2 yr, the risk of relapse on surveillance for CSI testicular cancer is very low. Consideration should be given to adapting surveillance protocols to individualized risk of relapse based on cRR as opposed to static protocols based on baseline factors. This strategy could reduce the intensity of follow-up for the majority of patients.\n\n\nPATIENT SUMMARY\nOur study is the first to provide data on the future risk of relapse during surveillance for clinical stage I testicular cancer, given a patient has been without relapse for a specified period of time.",
"title": ""
},
{
"docid": "76f4d1051bcb75156f4fcf402b1ebf27",
"text": "Slowly but surely, Alzheimer's disease (AD) patients lose their memory and their cognitive abilities, and even their personalities may change dramatically. These changes are due to the progressive dysfunction and death of nerve cells that are responsible for the storage and processing of information. Although drugs can temporarily improve memory, at present there are no treatments that can stop or reverse the inexorable neurodegenerative process. But rapid progress towards understanding the cellular and molecular alterations that are responsible for the neuron's demise may soon help in developing effective preventative and therapeutic strategies.",
"title": ""
},
{
"docid": "187bbc30046f17b2030c9dbe3c800074",
"text": "To present a summary of current scientific evidence about the cannabinoid, cannabidiol (CBD) with regard to its relevance to epilepsy and other selected neuropsychiatric disorders. We summarize the presentations from a conference in which invited participants reviewed relevant aspects of the physiology, mechanisms of action, pharmacology, and data from studies with animal models and human subjects. Cannabis has been used to treat disease since ancient times. Δ(9) -Tetrahydrocannabinol (Δ(9) -THC) is the major psychoactive ingredient and CBD is the major nonpsychoactive ingredient in cannabis. Cannabis and Δ(9) -THC are anticonvulsant in most animal models but can be proconvulsant in some healthy animals. The psychotropic effects of Δ(9) -THC limit tolerability. CBD is anticonvulsant in many acute animal models, but there are limited data in chronic models. The antiepileptic mechanisms of CBD are not known, but may include effects on the equilibrative nucleoside transporter; the orphan G-protein-coupled receptor GPR55; the transient receptor potential of vanilloid type-1 channel; the 5-HT1a receptor; and the α3 and α1 glycine receptors. CBD has neuroprotective and antiinflammatory effects, and it appears to be well tolerated in humans, but small and methodologically limited studies of CBD in human epilepsy have been inconclusive. More recent anecdotal reports of high-ratio CBD:Δ(9) -THC medical marijuana have claimed efficacy, but studies were not controlled. CBD bears investigation in epilepsy and other neuropsychiatric disorders, including anxiety, schizophrenia, addiction, and neonatal hypoxic-ischemic encephalopathy. However, we lack data from well-powered double-blind randomized, controlled studies on the efficacy of pure CBD for any disorder. Initial dose-tolerability and double-blind randomized, controlled studies focusing on target intractable epilepsy populations such as patients with Dravet and Lennox-Gastaut syndromes are being planned. Trials in other treatment-resistant epilepsies may also be warranted. A PowerPoint slide summarizing this article is available for download in the Supporting Information section here.",
"title": ""
},
{
"docid": "6e8a9c37672ec575821da5c9c3145500",
"text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8c80129507b138d1254e39acfa9300fc",
"text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\[email protected].",
"title": ""
},
{
"docid": "eaf7b6b0cc18453538087cc90254dbd8",
"text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.",
"title": ""
}
] | scidocsrr |
d85aa425e7c3ca40f0275b09af8446bf | A fuzzy spatial coherence-based approach to background/foreground separation for moving object detection | [
{
"docid": "00e8c142e7f059c10cd9eabdb78e0120",
"text": "Running average method and its modified version are two simple and fast methods for background modeling. In this paper, some weaknesses of running average method and standard background subtraction are mentioned. Then, a fuzzy approach for background modeling and background subtraction is proposed. For fuzzy background modeling, fuzzy running average is suggested. Background modeling and background subtraction algorithms are very commonly used in vehicle detection systems. To demonstrate the advantages of fuzzy running average and fuzzy background subtraction, these methods and their standard versions are compared in vehicle detection application. Experimental results show that fuzzy approach is relatively more accurate than classical approach.",
"title": ""
}
] | [
{
"docid": "4c5dd43f350955b283f1a04ddab52d41",
"text": "This thesis deals with interaction design for a class of upcoming computer technologies for human use characterized by being different from traditional desktop computers in their physical appearance and the contexts in which they are used. These are typically referred to as emerging technologies. Emerging technologies often imply interaction dissimilar from how computers are usually operated. This challenges the scope and applicability of existing knowledge about human-computer interaction design. The thesis focuses on three specific technologies: virtual reality, augmented reality and mobile computer systems. For these technologies, five themes are addressed: current focus of research, concepts, interaction styles, methods and tools. These themes inform three research questions, which guide the conducted research. The thesis consists of five published research papers and a summary. In the summary, current focus of research is addressed from the perspective of research methods and research purpose. Furthermore, the notions of human-computer interaction design and emerging technologies are discussed and two central distinctions are introduced. Firstly, interaction design is divided into two categories with focus on systems and processes respectively. Secondly, the three studied emerging technologies are viewed in relation to immersion into virtual space and mobility in physical space. These distinctions are used to relate the five paper contributions, each addressing one of the three studied technologies with focus on properties of systems or the process of creating them respectively. Three empirical sources contribute to the results. Experiments with interaction design inform the development of concepts and interaction styles suitable for virtual reality, augmented reality and mobile computer systems. Experiments with designing interaction inform understanding of how methods and tools support design processes for these technologies. Finally, a literature survey informs a review of existing research, and identifies current focus, limitations and opportunities for future research. The primary results of the thesis are: 1) Current research within human-computer interaction design for the studied emerging technologies focuses on building systems ad-hoc and evaluating them in artificial settings. This limits the generation of cumulative theoretical knowledge. 2) Interaction design for the emerging technologies studied requires the development of new suitable concepts and interaction styles. Suitable concepts describe unique properties and challenges of a technology. Suitable interaction styles respond to these challenges by exploiting the technology’s unique properties. 3) Designing interaction for the studied emerging technologies involves new use situations, a distance between development and target platforms and complex programming. Elements of methods exist, which are useful for supporting the design of interaction, but they are fragmented and do not support the process as a whole. The studied tools do not support the design process as a whole either but support aspects of interaction design by bridging the gulf between development and target platforms and providing advanced programming environments. Menneske-maskine interaktionsdesign for opkommende teknologier Virtual Reality, Augmented Reality og Mobile Computersystemer",
"title": ""
},
{
"docid": "b04ba2e942121b7a32451f0b0f690553",
"text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS). Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed. Electronics 2015, 4 381",
"title": ""
},
{
"docid": "9aa24f6e014ac5104c5b9ff68dc45576",
"text": "The development of social networks has led the public in general to find easy accessibility for communication with respect to rapid communication to each other at any time. Such services provide the quick transmission of information which is its positive side but its negative side needs to be kept in mind thereby misinformation can spread. Nowadays, in this era of digitalization, the validation of such information has become a real challenge, due to lack of information authentication method. In this paper, we design a framework for the rumors detection from the Facebook events data, which is based on inquiry comments. The proposed Inquiry Comments Detection Model (ICDM) identifies inquiry comments utilizing a rule-based approach which entails regular expressions to categorize the sentences as an inquiry into those starting with an intransitive verb (like is, am, was, will, would and so on) and also those sentences ending with a question mark. We set the threshold value to compare with the ratio of Inquiry to English comments and identify the rumors. We verified the proposed ICDM on labeled data, collected from snopes.com. Our experiments revealed that the proposed method achieved considerably well in comparison to the existing machine learning techniques. The proposed ICDM approach attained better results of 89% precision, 77% recall, and 82% F-measure. We are of the opinion that our experimental findings of this study will be useful for the worldwide adoption. Keywords—Social networks; rumors; inquiry comments; question identification",
"title": ""
},
{
"docid": "153f452486e2eacb9dc1cf95275dd015",
"text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations. The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.",
"title": ""
},
{
"docid": "096bc66bb6f4c04109cf26d9d474421c",
"text": "A statistical analysis of full text downloads of articles in Elsevier's ScienceDirect covering all disciplines reveals large differences in download frequencies, their skewness, and their correlation with Scopus-based citation counts, between disciplines, journals, and document types. Download counts tend to be two orders of magnitude higher and less skewedly distributed than citations. A mathematical model based on the sum of two exponentials does not adequately capture monthly download counts. The degree of correlation at the article level within a journal is similar to that at the journal level in the discipline covered by that journal, suggesting that the differences between journals are to a large extent discipline-specific. Despite the fact that in all study journals download and citation counts per article positively correlate, little overlap may exist between the set of articles appearing in the top of the citation distribution and that with the most frequently downloaded ones. Usage and citation leaks, bulk downloading, differences between reader and author populations in a subject field, the type of document or its content, differences in obsolescence patterns between downloads and citations, different functions of reading and citing in the research process, all provide possible explanations of differences between download and citation distributions.",
"title": ""
},
{
"docid": "9728b73d9b5075b5b0ee878ddfc9379a",
"text": "The security research community has invested significant effort in improving the security of Android applications over the past half decade. This effort has addressed a wide range of problems and resulted in the creation of many tools for application analysis. In this article, we perform the first systematization of Android security research that analyzes applications, characterizing the work published in more than 17 top venues since 2010. We categorize each paper by the types of problems they solve, highlight areas that have received the most attention, and note whether tools were ever publicly released for each effort. Of the released tools, we then evaluate a representative sample to determine how well application developers can apply the results of our community’s efforts to improve their products. We find not only that significant work remains to be done in terms of research coverage but also that the tools suffer from significant issues ranging from lack of maintenance to the inability to produce functional output for applications with known vulnerabilities. We close by offering suggestions on how the community can more successfully move forward.",
"title": ""
},
{
"docid": "1585d7e1f1e6950949dc954c2d0bba51",
"text": "The state-of-the-art techniques for aspect-level sentiment analysis focus on feature modeling using a variety of deep neural networks (DNN). Unfortunately, their practical performance may fall short of expectations due to semantic complexity of natural languages. Motivated by the observation that linguistic hints (e.g. explicit sentiment words and shift words) can be strong indicators of sentiment, we present a joint framework, SenHint, which integrates the output of deep neural networks and the implication of linguistic hints into a coherent reasoning model based on Markov Logic Network (MLN). In SenHint, linguistic hints are used in two ways: (1) to identify easy instances, whose sentiment can be automatically determined by machine with high accuracy; (2) to capture implicit relations between aspect polarities. We also empirically evaluate the performance of SenHint on both English and Chinese benchmark datasets. Our experimental results show that SenHint can effectively improve accuracy compared with the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "bf445955186e2f69f4ef182850090ffc",
"text": "The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "63dcb42d456ab4b6512c47437e354f7b",
"text": "The deep learning revolution brought us an extensive array of neural network architectures that achieve state-of-the-art performance in a wide variety of Computer Vision tasks including among others classification, detection and segmentation. In parallel, we have also been observing an unprecedented demand in computational and memory requirements, rendering the efficient use of neural networks in low-powered devices virtually unattainable. Towards this end, we propose a threestage compression and acceleration pipeline that sparsifies, quantizes and entropy encodes activation maps of Convolutional Neural Networks. Sparsification increases the representational power of activation maps leading to both acceleration of inference and higher model accuracy. Inception-V3 and MobileNet-V1 can be accelerated by as much as 1.6× with an increase in accuracy of 0.38% and 0.54% on the ImageNet and CIFAR-10 datasets respectively. Quantizing and entropy coding the sparser activation maps lead to higher compression over the baseline, reducing the memory cost of the network execution. Inception-V3 and MobileNet-V1 activation maps, quantized to 16 bits, are compressed by as much as 6× with an increase in accuracy of 0.36% and 0.55% respectively.",
"title": ""
},
{
"docid": "023fa0ac94b2ea1740f1bbeb8de64734",
"text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.",
"title": ""
},
{
"docid": "2d492d66d0abee5d5dd41cf73a83e943",
"text": "Using a novel replacement gate SOI FinFET device structure, we have fabricated FinFETs with fin width (D<inf>Fin</inf>) of 4nm, fin pitch (FP) of 40nm, and gate length (L<inf>G</inf>) of 20nm. With this structure, we have achieved arrays of thousands of fins for D<inf>Fin</inf> down to 4nm with robust yield and structural integrity. We observe performance degradation, increased variability, and V<inf>T</inf> shift as D<inf>Fin</inf> is reduced. Capacitance measurements agree with quantum confinement behavior which has been predicted to pose a fundamental limit to scaling FinFETs below 10nm L<inf>G</inf>.",
"title": ""
},
{
"docid": "b3a775719d87c3837de671001c77568b",
"text": "Regularization of Deep Neural Networks (DNNs) for the sake of improving their generalization capability is important and challenging. The development in this line benefits theoretical foundation of DNNs and promotes their usability in different areas of artificial intelligence. In this paper, we investigate the role of Rademacher complexity in improving generalization of DNNs and propose a novel regularizer rooted in Local Rademacher Complexity (LRC). While Rademacher complexity is well known as a distribution-free complexity measure of function class that help boost generalization of statistical learning methods, extensive study shows that LRC, its counterpart focusing on a restricted function class, leads to sharper convergence rates and potential better generalization given finite training sample. Our LRC based regularizer is developed by estimating the complexity of the function class centered at the minimizer of the empirical loss of DNNs. Experiments on various types of network architecture demonstrate the effectiveness of LRC regularization in improving generalization. Moreover, our method features the state-of-the-art result on the CIFAR-10 dataset with network architecture found by neural architecture search.",
"title": ""
},
{
"docid": "c41038d0e3cf34e8a1dcba07a86cce9a",
"text": "Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common cause of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as tumor necrosis factor alpha (TNF-a)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.",
"title": ""
},
{
"docid": "4cbec8031ea32380675b1d8dff107cab",
"text": "Quorum-sensing bacteria communicate with extracellular signal molecules called autoinducers. This process allows community-wide synchronization of gene expression. A screen for additional components of the Vibrio harveyi and Vibrio cholerae quorum-sensing circuits revealed the protein Hfq. Hfq mediates interactions between small, regulatory RNAs (sRNAs) and specific messenger RNA (mRNA) targets. These interactions typically alter the stability of the target transcripts. We show that Hfq mediates the destabilization of the mRNA encoding the quorum-sensing master regulators LuxR (V. harveyi) and HapR (V. cholerae), implicating an sRNA in the circuit. Using a bioinformatics approach to identify putative sRNAs, we identified four candidate sRNAs in V. cholerae. The simultaneous deletion of all four sRNAs is required to stabilize hapR mRNA. We propose that Hfq, together with these sRNAs, creates an ultrasensitive regulatory switch that controls the critical transition into the high cell density, quorum-sensing mode.",
"title": ""
},
{
"docid": "329487a07d4f71e30b64da5da1c6684a",
"text": "The purpose was to investigate the effect of 25 weeks heavy strength training in young elite cyclists. Nine cyclists performed endurance training and heavy strength training (ES) while seven cyclists performed endurance training only (E). ES, but not E, resulted in increases in isometric half squat performance, lean lower body mass, peak power output during Wingate test, peak aerobic power output (W(max)), power output at 4 mmol L(-1)[la(-)], mean power output during 40-min all-out trial, and earlier occurrence of peak torque during the pedal stroke (P < 0.05). ES achieved superior improvements in W(max) and mean power output during 40-min all-out trial compared with E (P < 0.05). The improvement in 40-min all-out performance was associated with the change toward achieving peak torque earlier in the pedal stroke (r = 0.66, P < 0.01). Neither of the groups displayed alterations in VO2max or cycling economy. In conclusion, heavy strength training leads to improved cycling performance in elite cyclists as evidenced by a superior effect size of ES training vs E training on relative improvements in power output at 4 mmol L(-1)[la(-)], peak power output during 30-s Wingate test, W(max), and mean power output during 40-min all-out trial.",
"title": ""
},
{
"docid": "a059fc50eb0e4cab21b04a75221b3160",
"text": "This paper presents the design of an X-band active antenna self-oscillating down-converter mixer in substrate integrated waveguide technology (SIW). Electromagnetic analysis is used to design a SIW cavity backed patch antenna with resonance at 9.9 GHz used as the receiving antenna, and subsequently harmonic balance analysis combined with optimization techniques are used to synthesize a self-oscillating mixer with oscillating frequency of 6.525 GHz. The conversion gain is optimized for the mixing product involving the second harmonic of the oscillator and the RF input signal, generating an IF frequency of 3.15 GHz to have conversion gain in at least 600 MHz bandwidth around the IF frequency. The active antenna circuit finds application in compact receiver front-end modules as well as active self-oscillating mixer arrays.",
"title": ""
},
{
"docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c",
"text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.",
"title": ""
},
{
"docid": "094906bcd076ae3207ba04755851c73a",
"text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compelled sentiment lexicons and word embeddings. We test their performance on twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produces more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.",
"title": ""
},
{
"docid": "598744a94cbff466c42e6788d5e23a79",
"text": "The energy consumption of DRAM is a critical concern in modern computing systems. Improvements in manufacturing process technology have allowed DRAM vendors to lower the DRAM supply voltage conservatively, which reduces some of the DRAM energy consumption. We would like to reduce the DRAM supply voltage more aggressively, to further reduce energy. Aggressive supply voltage reduction requires a thorough understanding of the effect voltage scaling has on DRAM access latency and DRAM reliability.\n In this paper, we take a comprehensive approach to understanding and exploiting the latency and reliability characteristics of modern DRAM when the supply voltage is lowered below the nominal voltage level specified by DRAM standards. Using an FPGA-based testing platform, we perform an experimental study of 124 real DDR3L (low-voltage) DRAM chips manufactured recently by three major DRAM vendors. We find that reducing the supply voltage below a certain point introduces bit errors in the data, and we comprehensively characterize the behavior of these errors. We discover that these errors can be avoided by increasing the latency of three major DRAM operations (activation, restoration, and precharge). We perform detailed DRAM circuit simulations to validate and explain our experimental findings. We also characterize the various relationships between reduced supply voltage and error locations, stored data patterns, DRAM temperature, and data retention.\n Based on our observations, we propose a new DRAM energy reduction mechanism, called Voltron. The key idea of Voltron is to use a performance model to determine by how much we can reduce the supply voltage without introducing errors and without exceeding a user-specified threshold for performance loss. Our evaluations show that Voltron reduces the average DRAM and system energy consumption by 10.5% and 7.3%, respectively, while limiting the average system performance loss to only 1.8%, for a variety of memory-intensive quad-core workloads. We also show that Voltron significantly outperforms prior dynamic voltage and frequency scaling mechanisms for DRAM.",
"title": ""
}
] | scidocsrr |
f3bfa7411abe00a52553040db8c634d8 | A novel Bayesian network-based fault prognostic method for semiconductor manufacturing process | [
{
"docid": "1874bd466665e39dbb4bd28b2b0f0d6e",
"text": "Pattern recognition encompasses two fundamental tasks: description and classification. Given an object to analyze, a pattern recognition system first generates a description of it (i.e., the pattern) and then classifies the object based on that description (i.e., the recognition). Two general approaches for implementing pattern recognition systems, statistical and structural, employ different techniques for description and classification. Statistical approaches to pattern recognition use decision-theoretic concepts to discriminate among objects belonging to different groups based upon their quantitative features. Structural approaches to pattern recognition use syntactic grammars to discriminate among objects belonging to different groups based upon the arrangement of their morphological (i.e., shape-based or structural) features. Hybrid approaches to pattern recognition combine aspects of both statistical and structural pattern recognition. Structural pattern recognition systems are difficult to apply to new domains because implementation of both the description and classification tasks requires domain knowledge. Knowledge acquisition techniques necessary to obtain domain knowledge from experts are tedious and often fail to produce a complete and accurate knowledge base. Consequently, applications of structural pattern recognition have been primarily restricted to domains in which the set of useful morphological features has been established in the literature (e.g., speech recognition and character recognition) and the syntactic grammars can be composed by hand (e.g., electrocardiogram diagnosis). To overcome this limitation, a domain-independent approach to structural pattern recognition is needed that is capable of extracting morphological features and performing classification without relying on domain knowledge. A hybrid system that employs a statistical classification technique to perform discrimination based on structural features is a natural solution. While a statistical classifier is inherently domain independent, the domain knowledge necessary to support the description task can be eliminated with a set of generally-useful morphological features. Such a set of morphological features is suggested as the foundation for the development of a suite of structure detectors to perform generalized feature extraction for structural pattern recognition in time-series data. The ability of the suite of structure detectors to generate features useful for structural pattern recognition is evaluated by comparing the classification accuracies achieved when using the structure detectors versus commonly-used statistical feature extractors. Two real-world databases with markedly different characteristics and established ground truth serve as sources of data for the evaluation. The classification accuracies achieved using the features extracted by the structure detectors were consistently as good as or better than the classification accuracies achieved when using the features generated by the statistical feature extractors, thus demonstrating that the suite of structure detectors effectively performs generalized feature extraction for structural pattern recognition in time-series data.",
"title": ""
}
] | [
{
"docid": "0fd4b7ed6e3c67fb9d4bb70e83d8796c",
"text": "The biological properties of dietary polyphenols are greatly dependent on their bioavailability that, in turn, is largely influenced by their degree of polymerization. The gut microbiota play a key role in modulating the production, bioavailability and, thus, the biological activities of phenolic metabolites, particularly after the intake of food containing high-molecular-weight polyphenols. In addition, evidence is emerging on the activity of dietary polyphenols on the modulation of the colonic microbial population composition or activity. However, although the great range of health-promoting activities of dietary polyphenols has been widely investigated, their effect on the modulation of the gut ecology and the two-way relationship \"polyphenols ↔ microbiota\" are still poorly understood. Only a few studies have examined the impact of dietary polyphenols on the human gut microbiota, and most were focused on single polyphenol molecules and selected bacterial populations. This review focuses on the reciprocal interactions between the gut microbiota and polyphenols, the mechanisms of action and the consequences of these interactions on human health.",
"title": ""
},
{
"docid": "dc7f68a286fcf0ebc36bc02b80b5b6bd",
"text": "Many studies of digital communication, in particular of Twitter, use natural language processing (NLP) to find topics, assess sentiment, and describe user behaviour. In finding topics often the relationships between users who participate in the topic are neglected. We propose a novel method of describing and classifying online conversations using only the structure of the underlying temporal network and not the content of individual messages. This method utilises all available information in the temporal network (no aggregation), combining both topological and temporal structure using temporal motifs and inter-event times. This allows us create an embedding of the temporal network in order to describe the behaviour of individuals and collectives over time and examine the structure of conversation over multiple timescales.",
"title": ""
},
{
"docid": "85b1fe5c3d6d68791345d32eda99055b",
"text": "Surgery and other invasive therapies are complex interventions, the assessment of which is challenged by factors that depend on operator, team, and setting, such as learning curves, quality variations, and perception of equipoise. We propose recommendations for the assessment of surgery based on a five-stage description of the surgical development process. We also encourage the widespread use of prospective databases and registries. Reports of new techniques should be registered as a professional duty, anonymously if necessary when outcomes are adverse. Case series studies should be replaced by prospective development studies for early technical modifications and by prospective research databases for later pre-trial evaluation. Protocols for these studies should be registered publicly. Statistical process control techniques can be useful in both early and late assessment. Randomised trials should be used whenever possible to investigate efficacy, but adequate pre-trial data are essential to allow power calculations, clarify the definition and indications of the intervention, and develop quality measures. Difficulties in doing randomised clinical trials should be addressed by measures to evaluate learning curves and alleviate equipoise problems. Alternative prospective designs, such as interrupted time series studies, should be used when randomised trials are not feasible. Established procedures should be monitored with prospective databases to analyse outcome variations and to identify late and rare events. Achievement of improved design, conduct, and reporting of surgical research will need concerted action by editors, funders of health care and research, regulatory bodies, and professional societies.",
"title": ""
},
{
"docid": "20d1cb8d2f416c1dc07e5a34c2ec43ba",
"text": "Significant research and development of algorithms in intelligent transportation has grabbed more attention in recent years. An automated, fast, accurate and robust vehicle plate recognition system has become need for traffic control and law enforcement of traffic regulations; and the solution is ANPR. This paper is dedicated on an improved technique of OCR based license plate recognition using neural network trained dataset of object features. A blended algorithm for recognition of license plate is proposed and is compared with existing methods for improve accuracy. The whole system can be categorized under three major modules, namely License Plate Localization, Plate Character Segmentation, and Plate Character Recognition. The system is simulated on 300 national and international motor vehicle LP images and results obtained justifies the main requirement.",
"title": ""
},
{
"docid": "f59b0d27ace06c3bb2a54065de1a2243",
"text": "Background: A considerable amount of research has been done to explore the key factors that affect learning a second language. Among these factors are students' learning strategies, motivation, attitude, learning environment, and the age at which they are exposed to a second language. Such issues have not been explored extensively in Saudi Arabia, even though English is used as the medium of teaching and learning for medical studies. Objectives: First, to explore the learning strategies used to study English as a second language. Second, to identify students' motivations for studying English. Third, to assess students' perceptions toward their learning environment. Fourth, to investigate students' attitude towards the speakers of English. Fifth, to explore any possible relationships among English language proficiency grades of students and the following: demographic variables, grades for their general medical courses, learning strategies, motivational variables, attitudes, and environmental variables. It is also the aim of this study to explore the relationships between English language learning strategies and motivational variables. Methods: A cross-sectional descriptive study was conducted in May, 2008. The Attitudinal Measure of Learners of English as a Second Language (AMLESL) questionnaire was used to explore the learning strategies used by students to study English as a second language, their motivation to study English, their attitude toward English speaking people, and perceptions toward the environment where the learning is taking place. Results: A total of 110 out of 120 questionnaires were completed by Applied Medical Science undergraduates, yielding a response rate of 92%. Students utilize all types of learning strategies. Students were motivated 'integratively' and 'instrumentally'. There were significant correlations between the achievement in English and performance in general medical courses, learning strategies, motivation, age, and the formal level at which the student started to learn English. Conclusion: The study showed that students utilize all types of language learning strategies. However, cognitive strategies were the most frequently utilized. Students considered their learning environment as more positive than negative. Students were happy with their teacher, and with their English courses. Students held a positive attitude toward English speaking people. Achievement in English was associated positively with performance in the general medical courses, motivation, and social learning strategies. Relationship between English Language, Learning Strategies, Attitudes, Motivation, and Students’ Academic Achievement",
"title": ""
},
{
"docid": "ac24229e51822e44cb09baaf44e9623e",
"text": "Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background. This paper addresses this problem and formulates the key frame detection as one of finding the video frames that optimally maximally contribute to differentiating the underlying action category from all other categories. To this end, we introduce a deep two-stream ConvNet for key frame detection in videos that learns to directly predict the location of key frames. Our key idea is to automatically generate labeled data for the CNN learning using a supervised linear discriminant method. While the training data is generated taking many different human action videos into account, the trained CNN can predict the importance of frames from a single video. We specify a new ConvNet framework, consisting of a summarizer and discriminator. The summarizer is a two-stream ConvNet aimed at, first, capturing the appearance and motion features of video frames, and then encoding the obtained appearance and motion features for video representation. The discriminator is a fitting function aimed at distinguishing between the key frames and others in the video. We conduct experiments on a challenging human action dataset UCF101 and show that our method can detect key frames with high accuracy.",
"title": ""
},
{
"docid": "1cfbaddbf6500967956e135318f6a9c8",
"text": "Article history: Received 4 February 2009 Received in revised form 14 April 2010 Accepted 15 June 2010 Available online xxxx",
"title": ""
},
{
"docid": "5aeb8a7daa383259340ac7e27113f783",
"text": "This paper reports on the design, implementation and characterization of wafer-level packaging technology for a wide range of microelectromechanical system (MEMS) devices. The encapsulation technique is based on thermal decomposition of a sacrificial polymer through a polymer overcoat to form a released thin-film organic membrane with scalable height on top of the active part of the MEMS. Hermiticity and vacuum operation are obtained by thin-film deposition of a metal such as chromium, aluminum or gold. The thickness of the overcoat can be optimized according to the size of the device and differential pressure to package a wide variety of MEMS such as resonators, accelerometers and gyroscopes. The key performance metrics of several batches of packaged devices do not degrade as a result of residues from the sacrificial polymer. A Q factor of 5000 at a resonant frequency of 2.5 MHz for the packaged resonator, and a static sensitivity of 2 pF g−1 for the packaged accelerometer were obtained. Cavities as small as 0.000 15 mm3 for the resonator and as large as 1 mm3 for the accelerometer have been made by this method. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "19256f0de34e0a0b65c41754230643a0",
"text": "As interest in cryptocurrency has increased, problems have arisen with Proof-of-Work (PoW) and Proof-of-Stake (PoS) methods, the most representative methods of acquiring cryptocurrency in a blockchain. The PoW method is uneconomical and the PoS method can be easily monopolized by a few people. To cope with this issue, this paper introduces a Proof-of-Probability (PoP) method. The PoP is a method where each node sorts the encrypted actual hash as well as a number of fake hash, and then the first node to decrypt actual hash creates block. In addition, a wait time is used when decrypting one hash and then decrypting the next hash for restricting the excessive computing power competition. In addition, the centralization by validaters with many stakes can be avoided in the proposed PoP method.",
"title": ""
},
{
"docid": "fe0863027b80e28fe8c20cff5781a547",
"text": "We describe the implementation and use of a reverse compiler from Analog Devices 21xx assembler source to ANSI-C (with optional use of the language extensions for the TMS320C6x processors) which has been used to port substantial applications. The main results of this work are that reverse compilation is feasible and that some of the features that make small DSP's hard to compile for actually assist the process of reverse compilation compared to that of a general purpose processor. We present statistics on the occurrence of non-statically visible features of hand-written assembler code and look at the quality of the code generated by an optimising ANSI-C compiler from our reverse compiled source and compare it to code generated from conventionally authored ANSI-C programs.",
"title": ""
},
{
"docid": "2966dd1e2cd26b7c956d296ef6eb501e",
"text": "Information extraction from microblog posts is an important task, as today microblogs capture an unprecedented amount of information and provide a view into the pulse of the world. As the core component of information extraction, we consider the task of Twitter entity linking in this paper. In the current entity linking literature, mention detection and entity disambiguation are frequently cast as equally important but distinct problems. However, in our task, we find that mention detection is often the performance bottleneck. The reason is that messages on micro-blogs are short, noisy and informal texts with little context, and often contain phrases with ambiguous meanings. To rigorously address the Twitter entity linking problem, we propose a structural SVM algorithm for entity linking that jointly optimizes mention detection and entity disambiguation as a single end-to-end task. By combining structural learning and a variety of firstorder, second-order, and context-sensitive features, our system is able to outperform existing state-of-the art entity linking systems by 15% F1.",
"title": ""
},
{
"docid": "38b1a88b57d2834129a59ac235d6b414",
"text": "Historically, social scientists have sought out explanations of human and social phenomena that provide interpretable causal mechanisms, while often ignoring their predictive accuracy. We argue that the increasingly computational nature of social science is beginning to reverse this traditional bias against prediction; however, it has also highlighted three important issues that require resolution. First, current practices for evaluating predictions must be better standardized. Second, theoretical limits to predictive accuracy in complex social systems must be better characterized, thereby setting expectations for what can be predicted or explained. Third, predictive accuracy and interpretability must be recognized as complements, not substitutes, when evaluating explanations. Resolving these three issues will lead to better, more replicable, and more useful social science.",
"title": ""
},
{
"docid": "603a4d4037ce9fc653d46473f9085d67",
"text": "In different applications like Complex document image processing, Advertisement and Intelligent transportation logo recognition is an important issue. Logo Recognition is an essential sub process although there are many approaches to study logos in these fields. In this paper a robust method for recognition of a logo is proposed, which involves K-nearest neighbors distance classifier and Support Vector Machine classifier to evaluate the similarity between images under test and trained images. For test images eight set of logo image with a rotation angle of 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° are considered. A Dual Tree Complex Wavelet Transform features were used for determining features. Final result is obtained by measuring the similarity obtained from the feature vectors of the trained image and image under test. Total of 31 classes of logo images of different organizations are considered for experimental results. An accuracy of 87.49% is obtained using KNN classifier and 92.33% from SVM classifier.",
"title": ""
},
{
"docid": "1804ba10a62f81302f2701cfe0330783",
"text": "We describe a web browser fingerprinting technique based on measuring the onscreen dimensions of font glyphs. Font rendering in web browsers is affected by many factors—browser version, what fonts are installed, and hinting and antialiasing settings, to name a few— that are sources of fingerprintable variation in end-user systems. We show that even the relatively crude tool of measuring glyph bounding boxes can yield a strong fingerprint, and is a threat to users’ privacy. Through a user experiment involving over 1,000 web browsers and an exhaustive survey of the allocated space of Unicode, we find that font metrics are more diverse than User-Agent strings, uniquely identifying 34% of participants, and putting others into smaller anonymity sets. Fingerprinting is easy and takes only milliseconds. We show that of the over 125,000 code points examined, it suffices to test only 43 in order to account for all the variation seen in our experiment. Font metrics, being orthogonal to many other fingerprinting techniques, can augment and sharpen those other techniques. We seek ways for privacy-oriented web browsers to reduce the effectiveness of font metric–based fingerprinting, without unduly harming usability. As part of the same user experiment of 1,000 web browsers, we find that whitelisting a set of standard font files has the potential to more than quadruple the size of anonymity sets on average, and reduce the fraction of users with a unique font fingerprint below 10%. We discuss other potential countermeasures.",
"title": ""
},
{
"docid": "22be2a234b9211cefc713be861862d82",
"text": "BACKGROUND\nA new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference form murmurs.\n\n\nMETHOD\nEqual number of cardiac cycles were extracted from heart sounds with different heart rates using information from envelopes of autocorrelation functions without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using auto-correlation of envelope signals, features extraction using discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors.\n\n\nRESULT\nThe proposed method was tested on a set of heart sounds obtained from several on-line databases and recorded with an electronic stethoscope. Geometric mean was used as performance index. Average classification performance using ten-fold cross-validation was 0.92 for noise free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise up to 0.3 s duration.\n\n\nCONCLUSION\nThe proposed method showed promising results and high noise robustness to a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by different sources of heart sounds in the current training set, and to concretely validate the method. Further work include building a new training set recorded from actual patients, then further evaluate the method based on this new training set.",
"title": ""
},
{
"docid": "1c365e6256ae1c404c6f3f145eb04924",
"text": "Progress in signal processing continues to enable welcome advances in high-frequency (HF) radio performance and efficiency. The latest data waveforms use channels wider than 3 kHz to boost data throughput and robustness. This has driven the need for a more capable Automatic Link Establishment (ALE) system that links faster and adapts the wideband HF (WBHF) waveform to efficiently use available spectrum. In this paper, we investigate the possibility and advantages of using various non-scanning ALE techniques with the new wideband ALE (WALE) to further improve spectrum awareness and linking speed.",
"title": ""
},
{
"docid": "0a7f93e98e1d256ea6a4400f33753d6a",
"text": "In this paper, we investigate safe and efficient map-building strategies for a mobile robot with imperfect control and sensing. In the implementation, a robot equipped with a range sensor builds a polygonal map (layout) of a previously unknown indoor environment. The robot explores the environment and builds the map concurrently by patching together the local models acquired by the sensor into a global map. A well-studied and related problem is the simultaneous localization and mapping (SLAM) problem, where the goal is to integrate the information collected during navigation into the most accurate map possible. However, SLAM does not address the sensorplacement portion of the map-building task. That is, given the map built so far, where should the robot go next? This is the main question addressed in this paper. Concretely, an algorithm is proposed to guide the robot through a series of “good” positions, where “good” refers to the expected amount and quality of the information that will be revealed at each new location. This is similar to the nextbest-view (NBV) problem studied in computer vision and graphics. However, in mobile robotics the problem is complicated by several issues, two of which are particularly crucial. One is to achieve safe navigation despite an incomplete knowledge of the environment and sensor limitations (e.g., in range and incidence). The other issue is the need to ensure sufficient overlap between each new local model and the current map, in order to allow registration of successive views under positioning uncertainties inherent to mobile robots. To address both issues in a coherent framework, in this paper we introduce the concept of a safe region, defined as the largest region that is guaranteed to be free of obstacles given the sensor readings made so far. The construction of a safe region takes sensor limitations into account. In this paper we also describe an NBV algorithm that uses the safe-region concept to select the next robot position at each step. The International Journal of Robotics Research Vol. 21, No. 10–11, October-November 2002, pp. 829-848, ©2002 Sage Publications The new position is chosen within the safe region in order to maximize the expected gain of information under the constraint that the local model at this new position must have a minimal overlap with the current global map. In the future, NBV and SLAM algorithms should reinforce each other. While a SLAM algorithm builds a map by making the best use of the available sensory data, an NBV algorithm, such as that proposed here, guides the navigation of the robot through positions selected to provide the best sensory inputs. KEY WORDS—next-best view, safe region, online exploration, incidence constraints, map building",
"title": ""
},
{
"docid": "252526e9d50cab28d702f695c12acc27",
"text": "This paper describes several optimization techniques used to create an adequate route network graph for autonomous cars as a map reference for driving on German autobahn or similar highway tracks. We have taken the Route Network Definition File Format (RNDF) specified by DARPA and identified multiple flaws of the RNDF for creating digital maps for autonomous vehicles. Thus, we introduce various enhancements to it to form a digital map graph called RND-FGraph, which is well suited to map almost any urban transportation infrastructure. We will also outline and show results of fast optimizations to reduce the graph size. The RNDFGraph has been used for path-planning and trajectory evaluation by the behavior module of our two autonomous cars “Spirit of Berlin” and “MadeInGermany”. We have especially tuned the graph to map structured high speed environments such as autobahns where we have tested autonomously hundreds of kilometers under real traffic conditions.",
"title": ""
},
{
"docid": "d8170e82fcfb0da85ad2f3d7bed4161e",
"text": "In this paper, a new task scheduling algorithm called RASA, considering the distribution and scalability characteristics of grid resources, is proposed. The algorithm is built through a comprehensive study and analysis of two well known task scheduling algorithms, Min-min and Max-min. RASA uses the advantages of the both algorithms and covers their disadvantages. To achieve this, RASA firstly estimates the completion time of the tasks on each of the available grid resources and then applies the Max-min and Min-min algorithms, alternatively. In this respect, RASA uses the Min-min strategy to execute small tasks before the large ones and applies the Max-min strategy to avoid delays in the execution of large tasks and to support concurrency in the execution of large and small tasks. Our experimental results of applying RASA on scheduling independent tasks within grid environments demonstrate the applicability of RASA in achieving schedules with comparatively lower makespan.",
"title": ""
},
{
"docid": "3f79f0eee8878fd43187e9d48531a221",
"text": "In this paper, the design and development of a portable classroom attendance system based on fingerprint biometric is presented. Among the salient aims of implementing a biometric feature into a portable attendance system is security and portability. The circuit of this device is strategically constructed to have an independent source of energy to be operated, as well as its miniature design which made it more efficient in term of its portable capability. Rather than recording the attendance in writing or queuing in front of class equipped with fixed fingerprint or smart card reader. This paper introduces a portable fingerprint based biometric attendance system which addresses the weaknesses of the existing paper based attendance method or long time queuing. In addition, our biometric fingerprint based system is encrypted which preserves data integrity.",
"title": ""
}
] | scidocsrr |
a1a4c99e02f541e789f8618ca65b41f3 | Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction | [
{
"docid": "e3d212f67713f6a902fe0f3eb468eddf",
"text": "We propose a novel LSTM-based deep multi-task learning framework for aspect term extraction from user review sentences. Two LSTMs equipped with extended memories and neural memory operations are designed for jointly handling the extraction tasks of aspects and opinions via memory interactions. Sentimental sentence constraint is also added for more accurate prediction via another LSTM. Experiment results over two benchmark datasets demonstrate the effectiveness of our framework.",
"title": ""
}
] | [
{
"docid": "9ebdf3493d6a80d12c97348a2d203d3e",
"text": "Agile software development methodologies have been greeted with enthusiasm by many software developers, yet their widespread adoption has also resulted in closer examination of their strengths and weaknesses. While analyses and evaluations abound, the need still remains for an objective and systematic appraisal of Agile processes specifically aimed at defining strategies for their improvement. We provide a review of the strengths and weaknesses identified in Agile processes, based on which a strengths- weaknesses-opportunities-threats (SWOT) analysis of the processes is performed. We suggest this type of analysis as a useful tool for highlighting and addressing the problem issues in Agile processes, since the results can be used as improvement strategies.",
"title": ""
},
{
"docid": "b5097e718754c02cddd02a1c147c6398",
"text": "Semi-automatic parking system is a driver convenience system automating steering control required during parking operation. This paper proposes novel monocular-vision based target parking-slot recognition by recognizing parking-slot markings when driver designates a seed-point inside the target parking-slot with touch screen. Proposed method compensates the distortion of fisheye lens and constructs a bird’s eye view image using homography. Because adjacent vehicles are projected along the outward direction from camera in the bird’s eye view image, if marking line-segment distinguishing parking-slots from roadway and front-ends of marking linesegments dividing parking-slots are observed, proposed method successfully recognizes the target parking-slot marking. Directional intensity gradient, utilizing the width of marking line-segment and the direction of seed-point with respect to camera position as a prior knowledge, can detect marking linesegments irrespective of noise and illumination variation. Making efficient use of the structure of parking-slot markings in the bird’s eye view image, proposed method simply recognizes the target parking-slot marking. It is validated by experiments that proposed method can successfully recognize target parkingslot under various situations and illumination conditions.",
"title": ""
},
{
"docid": "8107b3dc36d240921571edfc778107ff",
"text": "FinFET devices have been proposed as a promising substitute for conventional bulk CMOS-based devices at the nanoscale due to their extraordinary properties such as improved channel controllability, a high on/off current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. This brief builds standard cell libraries for the advanced 7-nm FinFET technology, supporting multiple threshold voltages and supply voltages. The circuit synthesis results of various combinational and sequential circuits based on the presented 7-nm FinFET standard cell libraries forecast 10× and 1000× energy reductions on average in a superthreshold regime and 16× and 3000× energy reductions on average in a near-threshold regime as compared with the results of the 14-nm and 45-nm bulk CMOS technology nodes, respectively.",
"title": ""
},
{
"docid": "ef65f603b9f0441378e53ec7cabf7940",
"text": "Event extraction has been well studied for more than two decades, through both the lens of document-level and sentence-level event extraction. However, event extraction methods to date do not yet offer a satisfactory solution to providing concise, structured, document-level summaries of events in news articles. Prior work on document-level event extraction methods have focused on highly specific domains, often with great reliance on handcrafted rules. Such approaches do not generalize well to new domains. In contrast, sentence-level event extraction methods have applied to a much wider variety of domains, but generate output at such fine-grained details that they cannot offer good document-level summaries of events. In this thesis, we propose a new framework for extracting document-level event summaries called macro-events, unifying together aspects of both information extraction and text summarization. The goal of this work is to extract concise, structured representations of documents that can clearly outline the main event of interest and all the necessary argument fillers to describe the event. Unlike work in abstractive and extractive summarization, we seek to create template-based, structured summaries, rather than plain text summaries. We propose three novel methods to address the macro-event extraction task. First, we introduce a structured prediction model based on the Learning to Search framework for jointly learning argument fillers both across and within event argument slots. Second, we propose a multi-layer neural network that is trained directly on macro-event annotated data. Finally, we propose a deep learning method that treats the problem as machine comprehension, which does not require training with any on-domain macro-event labeled data. Our experimental results on a variety of domains show that such algorithms can achieve stronger performance on this task compared to existing baseline approaches. On average across all datasets, neural networks can achieve a 1.76% and 3.96% improvement on micro-averaged and macro-averaged F1 respectively over baseline approaches, while Learning to Search achieves a 3.87% and 5.10% improvement over baseline approaches on the same metrics. Furthermore, under scenarios of limited training data, we find that machine comprehension models can offer very strong performance compared to directly supervised algorithms, while requiring very little human effort to adapt to new domains.",
"title": ""
},
{
"docid": "f20e0b50b72b4b2796b77757ff20210e",
"text": "The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. Unfortunately, the memory and computation cost incurred by these complex mechanisms are undesirable for practical applications. As such, this paper tackles the question of whether it is possible to achieve competitive performance with simple neural architectures. We propose a simple but novel deep learning architecture for fast and efficient question-answer ranking and retrieval. More specifically, our proposed model, HyperQA, is a parameter efficient neural network that outperforms other parameter intensive models such as Attentive Pooling BiLSTMs and Multi-Perspective CNNs on multiple QA benchmarks. The novelty behind HyperQA is a pairwise ranking objective that models the relationship between question and answer embeddings in Hyperbolic space instead of Euclidean space. This empowers our model with a self-organizing ability and enables automatic discovery of latent hierarchies while learning embeddings of questions and answers. Our model requires no feature engineering, no similarity matrix matching, no complicated attention mechanisms nor over-parameterized layers and yet outperforms and remains competitive to many models that have these functionalities on multiple benchmarks.",
"title": ""
},
{
"docid": "17c9a72c46f63a7121ea9c9b6b893a2f",
"text": "This paper presents the artificial neural network approach namely Back propagation network (BPNs) and probabilistic neural network (PNN). It is used to classify the type of tumor in MRI images of different patients with Astrocytoma type of brain tumor. The image processing techniques have been developed for detection of the tumor in the MRI images. Gray Level Co-occurrence Matrix (GLCM) is used to achieve the feature extraction. The whole system worked in two modes firstly Training/Learning mode and secondly Testing/Recognition mode.",
"title": ""
},
{
"docid": "e724d4405f50fd74a2184187dcc52401",
"text": "This paper presents security of Internet of things. In the Internet of Things vision, every physical object has a virtual component that can produce and consume services Such extreme interconnection will bring unprecedented convenience and economy, but it will also require novel approaches to ensure its safe and ethical use. The Internet and its users are already under continual attack, and a growing economy-replete with business models that undermine the Internet's ethical use-is fully focused on exploiting the current version's foundational weaknesses.",
"title": ""
},
{
"docid": "29ec723fb3f26290f43af77210ca5022",
"text": "—Social media and Social Network Analysis (SNA) acquired a huge popularity and represent one of the most important social and computer science phenomena of recent years. One of the most studied problems in this research area is influence and information propagation. The aim of this paper is to analyze the information diffusion process and predict the influence (represented by the rate of infected nodes at the end of the diffusion process) of an initial set of nodes in two networks: Flickr user's contacts and YouTube videos users commenting these videos. These networks are dissimilar in their structure (size, type, diameter, density, components), and the type of the relationships (explicit relationship represented by the contacts links, and implicit relationship created by commenting on videos), they are extracted using NodeXL tool. Three models are used for modeling the dissemination process: Linear Threshold Model (LTM), Independent Cascade Model (ICM) and an extension of this last called Weighted Cascade Model (WCM). Networks metrics and visualization were manipulated by NodeXL as well. Experiments results show that the structure of the network affect the diffusion process directly. Unlike results given in the blog world networks, the information can spread farther through explicit connections than through implicit relations.",
"title": ""
},
{
"docid": "b10447097f8d513795b4f4e08e1838d8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "c0546dabfcd377af78ae65a6e0a6a255",
"text": "A hard real-time system is usually subject to stringent reliability and timing constraints since failure to produce correct results in a timely manner may lead to a disaster. One way to avoid missing deadlines is to trade the quality of computation results for timeliness, and software fault-tolerance is often achieved with the use of redundant programs. A deadline mechanism which combines these two methods is proposed to provide software faulttolerance in hard real-time periodic task systems. Specifically, we consider the problem of scheduling a set of realtime periodic tasks each of which has two versions:primary and alternate. The primary version contains more functions (thus more complex) and produces good quality results but its correctness is more difficult to verify because of its high level of complexity and resource usage. By contrast, the alternate version contains only the minimum required functions (thus simpler) and produces less precise but acceptable results, and its correctness is easy to verify. We propose a scheduling algorithm which (i) guarantees either the primary or alternate version of each critical task to be completed in time and (ii) attempts to complete as many primaries as possible. Our basic algorithm uses a fixed priority-driven preemptive scheduling scheme to pre-allocate time intervals to the alternates, and at run-time, attempts to execute primaries first. An alternate will be executed only (1) if its primary fails due to lack of time or manifestation of bugs, or (2) when the latest time to start execution of the alternate without missing the corresponding task deadline is reached. This algorithm is shown to be effective and easy to implement. This algorithm is enhanced further to prevent early failures in executing primaries from triggering failures in the subsequent job executions, thus improving efficiency of processor usage.",
"title": ""
},
{
"docid": "f69f8b58e926a8a4573dd650ee29f80b",
"text": "Zab is a crash-recovery atomic broadcast algorithm we designed for the ZooKeeper coordination service. ZooKeeper implements a primary-backup scheme in which a primary process executes clients operations and uses Zab to propagate the corresponding incremental state changes to backup processes1. Due the dependence of an incremental state change on the sequence of changes previously generated, Zab must guarantee that if it delivers a given state change, then all other changes it depends upon must be delivered first. Since primaries may crash, Zab must satisfy this requirement despite crashes of primaries.",
"title": ""
},
{
"docid": "8ae257994c6f412ceb843fcb98a67043",
"text": "Discovering the author's interest over time from documents has important applications in recommendation systems, authorship identification and opinion extraction. In this paper, we propose an interest drift model (IDM), which monitors the evolution of author interests in time-stamped documents. The model further uses the discovered author interest information to help finding better topics. Unlike traditional topic models, our model is sensitive to the ordering of words, thus it extracts more information from the semantic meaning of the context. The experiment results show that the IDM model learns better topics than state-of-the-art topic models.",
"title": ""
},
{
"docid": "95d767d1b9a2ba2aecdf26443b3dd4af",
"text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A(-1), linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C(-1) with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.",
"title": ""
},
{
"docid": "4c5b74544b1452ffe0004733dbeee109",
"text": "Literary genres are commonly viewed as being defined in terms of content and style. In this paper, we focus on one particular type of content feature, namely lexical expressions of emotion, and investigate the hypothesis that emotion-related information correlates with particular genres. Using genre classification as a testbed, we compare a model that computes lexiconbased emotion scores globally for complete stories with a model that tracks emotion arcs through stories on a subset of Project Gutenberg with five genres. Our main findings are: (a), the global emotion model is competitive with a largevocabulary bag-of-words genre classifier (80 % F1); (b), the emotion arc model shows a lower performance (59 % F1) but shows complementary behavior to the global model, as indicated by a very good performance of an oracle model (94 % F1) and an improved performance of an ensemble model (84 % F1); (c), genres differ in the extent to which stories follow the same emotional arcs, with particularly uniform behavior for anger (mystery) and fear (adventures, romance, humor, science fiction).",
"title": ""
},
{
"docid": "ce55485a60213c7656eb804b89be36cc",
"text": "In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.",
"title": ""
},
{
"docid": "e349ca11637dfad2d68a5082e27f11ff",
"text": "As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action–selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents.",
"title": ""
},
{
"docid": "77bbd6d3e1f1ae64bda32cd057cf0580",
"text": "Although great progress has been made in automatic speech recognition, significant performance degradation still exists in noisy environments. Recently, very deep convolutional neural networks CNNs have been successfully applied to computer vision and speech recognition tasks. Based on our previous work on very deep CNNs, in this paper this architecture is further developed to improve recognition accuracy for noise robust speech recognition. In the proposed very deep CNN architecture, we study the best configuration for the sizes of filters, pooling, and input feature maps: the sizes of filters and poolings are reduced and dimensions of input features are extended to allow for adding more convolutional layers. Then the appropriate pooling, padding, and input feature map selection strategies are investigated and applied to the very deep CNN to make it more robust for speech recognition. In addition, an in-depth analysis of the architecture reveals key characteristics, such as compact model scale, fast convergence speed, and noise robustness. The proposed new model is evaluated on two tasks: Aurora4 task with multiple additive noise types and channel mismatch, and the AMI meeting transcription task with significant reverberation. Experiments on both tasks show that the proposed very deep CNNs can significantly reduce word error rate WER for noise robust speech recognition. The best architecture obtains a 10.0% relative reduction over the traditional CNN on AMI, competitive with the long short-term memory recurrent neural networks LSTM-RNN acoustic model. On Aurora4, even without feature enhancement, model adaptation, and sequence training, it achieves a WER of 8.81%, a 17.0% relative improvement over the LSTM-RNN. To our knowledge, this is the best published result on Aurora4.",
"title": ""
},
{
"docid": "8c60d78e9c4db8a457c7555393089f7c",
"text": "Artificially structured metamaterials have enabled unprecedented flexibility in manipulating electromagnetic waves and producing new functionalities, including the cloak of invisibility based on coordinate transformation. Unlike other cloaking approaches4–6, which are typically limited to subwavelength objects, the transformation method allows the design of cloaking devices that render a macroscopic object invisible. In addition, the design is not sensitive to the object that is being cloaked. The first experimental demonstration of such a cloak at microwave frequencies was recently reported7. We note, however, that that design cannot be implemented for an optical cloak, which is certainly of particular interest because optical frequencies are where the word ‘invisibility’ is conventionally defined. Here we present the design of a non-magnetic cloak operating at optical frequencies. The principle and structure of the proposed cylindrical cloak are analysed, and the general recipe for the implementation of such a device is provided. The coordinate transformation used in the proposed nonmagnetic optical cloak of cylindrical geometry is similar to that in ref. 7, by which a cylindrical region r , b is compressed into a concentric cylindrical shell a , r , b as shown in Fig. 1a. This transformation results in the following requirements for anisotropic permittivity and permeability in the cloaking shell:",
"title": ""
},
{
"docid": "b75a9a52296877783431af9447200747",
"text": "Sentiment analysis has been a major area of interest, for which the existence of highquality resources is crucial. In Arabic, there is a reasonable number of sentiment lexicons but with major deficiencies. The paper presents a large-scale Standard Arabic Sentiment Lexicon (SLSA) that is publicly available for free and avoids the deficiencies in the current resources. SLSA has the highest up-to-date reported coverage. The construction of SLSA is based on linking the lexicon of AraMorph with SentiWordNet along with a few heuristics and powerful back-off. SLSA shows a relative improvement of 37.8% over a state-of-theart lexicon when tested for accuracy. It also outperforms it by an absolute 3.5% of F1-score when tested for sentiment analysis.",
"title": ""
}
] | scidocsrr |
31917eed92437862154233d7239c1af1 | 3D Point Cloud Semantic Modelling: Integrated Framework for Indoor Spaces and Furniture | [
{
"docid": "1dcae3f9b4680725d2c7f5aa1736967c",
"text": "Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment.",
"title": ""
}
] | [
{
"docid": "72b25e72706720f71ebd6fe8cf769df5",
"text": "This paper reports our recent result in designing a function for autonomous APs to estimate throughput and delay of its clients in 2.4GHz WiFi channels to support those APs' dynamic channel selection. Our function takes as inputs the traffic volume and strength of signals emitted from nearby interference APs as well as the target AP's traffic volume. By this function, the target AP can estimate throughput and delay of its clients without actually moving to each channel, it is just required to monitor IEEE802.11 MAC frames sent or received by the interference APs. The function is composed of an SVM-based classifier to estimate capacity saturation and a regression function to estimate both throughput and delay in case of saturation in the target channel. The training dataset for the machine learning is created by a highly-precise network simulator. We have conducted over 10,000 simulations to train the model, and evaluated using additional 2,000 simulation results. The result shows that the estimated throughput error is less than 10%.",
"title": ""
},
{
"docid": "b50c010e8606de8efb7a9e861ca31059",
"text": "A Software Defined Network (SDN) is a new network architecture that provides central control over the network. Although central control is the major advantage of SDN, it is also a single point of failure if it is made unreachable by a Distributed Denial of Service (DDoS) Attack. To mitigate this threat, this paper proposes to use the central control of SDN for attack detection and introduces a solution that is effective and lightweight in terms of the resources that it uses. More precisely, this paper shows how DDoS attacks can exhaust controller resources and provides a solution to detect such attacks based on the entropy variation of the destination IP address. This method is able to detect DDoS within the first five hundred packets of the attack traffic.",
"title": ""
},
{
"docid": "bf2746e237446a477919b3d6c2940237",
"text": "In this paper, we first introduce the RF performance of Globalfoundries 45RFSOI process. NFET Ft > 290GHz and Fmax >380GHz. Then we present several mm-Wave circuit block designs, i.e., Switch, Power Amplifier, and LNA, based on 45RFSOI process for 5G Front End Module (FEM) applications. For the SPDT switch, insertion loss (IL) < 1dB at 30GHz with 32dBm P1dB and > 25dBm Pmax. For the PA, with a 2.9V power supply, the PA achieves 13.1dB power gain and a saturated output power (Psat) of 16.2dBm with maximum power-added efficiency (PAE) of 41.5% at 24Ghz continuous-wave (CW). With 960Mb/s 64QAM signal, 22.5% average PAE, −29.6dB EVM, and −30.5dBc ACLR are achieved with 9.5dBm average output power.",
"title": ""
},
{
"docid": "c00a29466c82f972a662b0e41b724928",
"text": "We introduce the type theory ¿µv, a call-by-value variant of Parigot's ¿µ-calculus, as a Curry-Howard representation theory of classical propositional proofs. The associated rewrite system is Church-Rosser and strongly normalizing, and definitional equality of the type theory is consistent, compatible with cut, congruent and decidable. The attendant call-by-value programming language µPCFv is obtained from ¿µv by augmenting it by basic arithmetic, conditionals and fixpoints. We study the behavioural properties of µPCFv and show that, though simple, it is a very general language for functional computation with control: it can express all the main control constructs such as exceptions and first-class continuations. Proof-theoretically the dual ¿µv-constructs of naming and µ-abstraction witness the introduction and elimination rules of absurdity respectively. Computationally they give succinct expression to a kind of generic (forward) \"jump\" operator, which may be regarded as a unifying control construct for functional computation. Our goal is that ¿µv and µPCFv respectively should be to functional computation with first-class access to the flow of control what ¿-calculus and PCF respectively are to pure functional programming: ¿µv gives the logical basis via the Curry-Howard correspondence, and µPCFv is a prototypical language albeit in purified form.",
"title": ""
},
{
"docid": "f52cde20377d4b8b7554f9973c220d0a",
"text": "A typical method to obtain valuable information is to extract the sentiment or opinion from a message. Machine learning technologies are widely used in sentiment classification because of their ability to “learn” from the training dataset to predict or support decision making with relatively high accuracy. However, when the dataset is large, some algorithms might not scale up well. In this paper, we aim to evaluate the scalability of Naïve Bayes classifier (NBC) in large datasets. Instead of using a standard library (e.g., Mahout), we implemented NBC to achieve fine-grain control of the analysis procedure. A Big Data analyzing system is also design for this study. The result is encouraging in that the accuracy of NBC is improved and approaches 82% when the dataset size increases. We have demonstrated that NBC is able to scale up to analyze the sentiment of millions movie reviews with increasing throughput.",
"title": ""
},
{
"docid": "281323234970e764eff59579220be9b4",
"text": "Methods based on kernel density estimation have been successfully applied for various data mining tasks. Their natural interpretation together with suitable properties make them an attractive tool among others in clustering problems. In this paper, the Complete Gradient Clustering Algorithm has been used to investigate a real data set of grains. The wheat varieties, Kama, Rosa and Canadian, characterized by measurements of main grain geometric features obtained by X-ray technique, have been analyzed. The proposed algorithm is expected to be an effective tool for recognizing wheat varieties. A comparison between the clustering results obtained from this method and the classical k-means clustering algorithm shows positive practical features of the Complete Gradient Clustering Algorithm.",
"title": ""
},
{
"docid": "e872a91433539301a857eab518cacb38",
"text": "Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present Arnold, a completely autonomous agent to play First-Person Shooter Games using only screen pixel data and demonstrate its effectiveness on Doom, a classical firstperson shooter game. Arnold is trained with deep reinforcement learning using a recent Action-Navigation architecture, which uses separate deep neural networks for exploring the map and fighting enemies. Furthermore, it utilizes a lot of techniques such as augmenting high-level game features, reward shaping and sequential updates for efficient training and effective performance. Arnold outperforms average humans as well as in-built game bots on different variations of the deathmatch. It also obtained the highest kill-to-death ratio in both the tracks of the Visual Doom AI Competition and placed second in terms of the number of frags.",
"title": ""
},
{
"docid": "5374ed153eb37e5680f1500fea5b9dbe",
"text": "Social media have become dominant in everyday life during the last few years where users share their thoughts and experiences about their enjoyable events in posts. Most of these posts are related to different categories related to: activities, such as dancing, landscapes, such as beach, people, such as a selfie, and animals such as pets. While some of these posts become popular and get more attention, others are completely ignored. In order to address the desire of users to create popular posts, several researches have studied post popularity prediction. Existing works focus on predicting the popularity without considering the category type of the post. In this paper we propose category specific post popularity prediction using visual and textual content for action, scene, people and animal categories. In this way we aim to answer the question What makes a post belonging to a specific action, scene, people or animal category popular? To answer to this question we perform several experiments on a collection of 65K posts crawled from Instagram.",
"title": ""
},
{
"docid": "1af028a0cf88d0ac5c52e84019554d51",
"text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robots perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment. Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.",
"title": ""
},
{
"docid": "c2fc81074ceed3d7c3690a4b23f7624e",
"text": "The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables--accuracy, correct and error response times, and their distributions--and provided a description of how the component processes involved in the lexical decision task were affected by experimental variables. All of the variables investigated affected the rate at which information was accumulated from the stimuli--called drift rate in the model. The different drift rates observed for the various classes of stimuli can all be explained by a 2-dimensional signal-detection representation of stimulus information. The authors discuss how this representation and the diffusion model's decision process might be integrated with current models of lexical access.",
"title": ""
},
{
"docid": "e3a2b7d38a777c0e7e06d2dc443774d5",
"text": "The area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, has been widely used to measure model performance for binary classification tasks. It can be estimated under parametric, semiparametric and nonparametric assumptions. The non-parametric estimate of the AUC, which is calculated from the ranks of predicted scores of instances, does not always sufficiently take advantage of the predicted scores. This problem is tackled in this paper. On the basis of the ranks and the original values of the predicted scores, we introduce a new metric, called a scored AUC or sAUC. Experimental results on 20 UCI data sets empirically demonstrate the validity of the new metric for classifier evaluation and selection.",
"title": ""
},
{
"docid": "fb1d1c291b175c1fc788832fec008664",
"text": "In Vehicular Ad Hoc Networks (VANETs), anonymity of the nodes sending messages should be preserved, while at the same time the law enforcement agencies should be able to trace the messages to the senders when necessary. It is also necessary that the messages sent are authenticated and delivered to the vehicles in the relevant areas quickly. In this paper, we present an efficient protocol for fast dissemination of authenticated messages in VANETs. It ensures the anonymity of the senders and also provides mechanism for law enforcement agencies to trace the messages to their senders, when necessary.",
"title": ""
},
{
"docid": "45940a48b86645041726120fb066a1fa",
"text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"title": ""
},
{
"docid": "1e6c2319e7c9e51cd4e31107d56bce91",
"text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. are being integrated by marketers today to analyse eWOM and derive actionable intelligence.",
"title": ""
},
{
"docid": "b9a214ad1b6a97eccf6c14d3d778b2ff",
"text": "In this paper a morphological tagging approach for document image invoice analysis is described. Tokens close by their morphology and confirmed in their location within different similar contexts make apparent some parts of speech representative of the structure elements. This bottom up approach avoids the use of an priori knowledge provided that there are redundant and frequent contexts in the text. The approach is applied on the invoice body text roughly recognized by OCR and automatically segmented. The method makes possible the detection of the invoice articles and their different fields. The regularity of the article composition and its redundancy in the invoice is a good help for its structure. The recognition rate of 276 invoices and 1704 articles, is over than 91.02% for articles and 92.56% for fields.",
"title": ""
},
{
"docid": "caf1a9d9b00e7d2c79a2869b17aa7292",
"text": "Human activity recognition using mobile device sensors is an active area of research in pervasive computing. In our work, we aim at implementing activity recognition approaches that are suitable for real life situations. This paper focuses on the problem of recognizing the on-body position of the mobile device which in a real world setting is not known a priori. We present a new real world data set that has been collected from 15 participants for 8 common activities were they carried 7 wearable devices in different positions. Further, we introduce a device localization method that uses random forest classifiers to predict the device position based on acceleration data. We perform the most complete experiment in on-body device location that includes all relevant device positions for the recognition of a variety of different activities. We show that the method outperforms other approaches achieving an F-Measure of 89% across different positions. We also show that the detection of the device position consistently improves the result of activity recognition for common activities.",
"title": ""
},
{
"docid": "52e0f106480635b84339c21d1a24dcde",
"text": "We propose a fast, parallel, maximum clique algorithm for large, sparse graphs that is designed to exploit characteristics of social and information networks. We observe roughly linear runtime scaling over graphs between 1000 vertices and 100M vertices. In a test with a 1.8 billion-edge social network, the algorithm finds the largest clique in about 20 minutes. For social networks, in particular, we found that using the core number of a vertex in combination with a good heuristic clique finder efficiently removes the vast majority of the search space. In addition, we parallelize the exploration of the search tree. In the algorithm, processes immediately communicate changes to upper and lower bounds on the size of maximum clique, which occasionally results in a super-linear speedup because vertices with especially large search spaces can be pruned by other processes. We use this clique finder to investigate the size of the largest temporal strong components in dynamic networks, which requires finding the largest clique in a particular temporal reachability graph.",
"title": ""
},
{
"docid": "673cf83a9e08ed4e70b6cb706e0ffc5b",
"text": "Conversation systems are of growing importance since they enable an easy interaction interface between humans and computers: using natural languages. To build a conversation system with adequate intelligence is challenging, and requires abundant resources including an acquisition of big data and interdisciplinary techniques, such as information retrieval and natural language processing. Along with the prosperity of Web 2.0, the massive data available greatly facilitate data-driven methods such as deep learning for human-computer conversation systems. Owing to the diversity of Web resources, a retrieval-based conversation system will come up with at least some results from the immense repository for any user inputs. Given a human issued message, i.e., query, a traditional conversation system would provide a response after adequate training and learning of how to respond. In this paper, we propose a new task for conversation systems: joint learning of response ranking featured with next utterance suggestion. We assume that the new conversation mode is more proactive and keeps user engaging. We examine the assumption in experiments. Besides, to address the joint learning task, we propose a novel Dual-LSTM Chain Model to couple response ranking and next utterance suggestion simultaneously. From the experimental results, we demonstrate the usefulness of the proposed task and the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "cef4c47b512eb4be7dcadcee35f0b2ca",
"text": "This paper presents a project that allows the Baxter humanoid robot to play chess against human players autonomously. The complete solution uses three main subsystems: computer vision based on a single camera embedded in Baxter's arm to perceive the game state, an open-source chess engine to compute the next move, and a mechatronics subsystem with a 7-DOF arm to manipulate the pieces. Baxter can play chess successfully in unconstrained environments by dynamically responding to changes in the environment. This implementation demonstrates Baxter's capabilities of vision-based adaptive control and small-scale manipulation, which can be applicable to numerous applications, while also contributing to the computer vision chess analysis literature.",
"title": ""
},
{
"docid": "f74ccd06a302b70980d7b3ba2ee76cfb",
"text": "As the world becomes more connected to the cyber world, attackers and hackers are becoming increasingly sophisticated to penetrate computer systems and networks. Intrusion Detection System (IDS) plays a vital role in defending a network against intrusion. Many commercial IDSs are available in marketplace but with high cost. At the same time open source IDSs are also available with continuous support and upgradation from large user community. Each of these IDSs adopts a different approaches thus may target different applications. This paper provides a quick review of six Open Source IDS tools so that one can choose the appropriate Open Source IDS tool as per their organization requirements.",
"title": ""
}
] | scidocsrr |
19e7b796871086d407576d1f0ef80d83 | Bidirectional Single-Stage Grid-Connected Inverter for a Battery Energy Storage System | [
{
"docid": "f1e9c9106dd3cdd7b568d5513b39ac7a",
"text": "This paper presents a novel zero-voltage switching (ZVS) approach to a grid-connected single-stage flyback inverter. The soft-switching of the primary switch is achieved by allowing negative current from the grid side through bidirectional switches placed on the secondary side of the transformer. Basically, the negative current discharges the metal-oxide-semiconductor field-effect transistor's output capacitor, thereby allowing turn on of the primary switch under zero voltage. To optimize the amount of reactive current required to achieve ZVS, a variable-frequency control scheme is implemented over the line cycle. In addition, the bidirectional switches on the secondary side of the transformer have ZVS during the turn- on times. Therefore, the switching losses of the bidirectional switches are negligible. A 250-W prototype has been implemented to validate the proposed scheme. Experimental results confirm the feasibility and superior performance of the converter compared with the conventional flyback inverter.",
"title": ""
},
{
"docid": "5042532d025cd5bdb21893a2c2e9f9b4",
"text": "This paper presents an energy sharing state-of-charge (SOC) balancing control scheme based on a distributed battery energy storage system architecture where the cell balancing system and the dc bus voltage regulation system are combined into a single system. The battery cells are decoupled from one another by connecting each cell with a small lower power dc-dc power converter. The small power converters are utilized to achieve both SOC balancing between the battery cells and dc bus voltage regulation at the same time. The battery cells' SOC imbalance issue is addressed from the root by using the energy sharing concept to automatically adjust the discharge/charge rate of each cell while maintaining a regulated dc bus voltage. Consequently, there is no need to transfer the excess energy between the cells for SOC balancing. The theoretical basis and experimental prototype results are provided to illustrate and validate the proposed energy sharing controller.",
"title": ""
}
] | [
{
"docid": "9dd6d9f5643c4884e981676230f3ee66",
"text": "A rank-r matrix X ∈ Rm×n can be written as a product UV >, where U ∈ Rm×r and V ∈ Rn×r. One could exploit this observation in optimization: e.g., consider the minimization of a convex function f(X) over rank-r matrices, where the scaffold of rank-r matrices is modeled via the factorization in U and V variables. Such heuristic has been widely used before for specific problem instances, where the solution sought is (approximately) low-rank. Though such parameterization reduces the number of variables and is more efficient in computational speed and memory requirement (of particular interest is the case r min{m,n}), it comes at a cost: f(UV >) becomes a non-convex function w.r.t. U and V . In this paper, we study such parameterization in optimization of generic convex f and focus on first-order, gradient descent algorithmic solutions. We propose an algorithm we call the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient first-order method that operates on the U, V factors. We show that when f is smooth, BFGD has local sublinear convergence, and linear convergence when f is both smooth and strongly convex. Moreover, for several key applications, we provide simple and efficient initialization schemes that provide approximate solutions good enough for the above convergence results to hold.",
"title": ""
},
{
"docid": "d5e573802d6519a8da402f2e66064372",
"text": "Targeted cyberattacks play an increasingly significant role in disrupting the online social and economic model, not to mention the threat they pose to nation-states. A variety of components and techniques come together to bring about such attacks.",
"title": ""
},
{
"docid": "074d9b68f1604129bcfdf0bb30bbd365",
"text": "This paper describes a methodology for semi-supervised learning of dialogue acts using the similarity between sentences. We suppose that the dialogue sentences with the same dialogue act are more similar in terms of semantic and syntactic information. However, previous work on sentence similarity mainly modeled a sentence as bag-of-words and then compared different groups of words using corpus-based or knowledge-based measurements of word semantic similarity. Novelly, we present a vector-space sentence representation, composed of word embeddings, that is, the related word distributed representations, and these word embeddings are organised in a sentence syntactic structure. Given the vectors of the dialogue sentences, a distance measurement can be well-defined to compute the similarity between them. Finally, a seeded k-means clustering algorithm is implemented to classify the dialogue sentences into several categories corresponding to particular dialogue acts. This constitutes the semi-supervised nature of the approach, which aims to ameliorate the reliance of the availability of annotated corpora. Experiments with Switchboard Dialog Act corpus show that classification accuracy is improved by 14%, compared to the state-of-art methods based on Support Vector Machine.",
"title": ""
},
{
"docid": "e1958dc823feee7f88ab5bf256655bee",
"text": "We describe an approach for testing a software system for possible securi ty flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the envi ronment, we view the security testing problem as the problem of testing for the fault-tolerance prop erties of a software system. We consider each environment perturbation as a fault and the resulting security ompromise a failure in the toleration of such faults. Our approach is based on the well known techn ique of fault-injection. Environment faults are injected into the system under test and system beha vior observed. The failure to tolerate faults is an indicator of a potential security flaw in the syst em. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what f aults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classif y 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.",
"title": ""
},
{
"docid": "b5af51c869fa4863dfa581b0fb8cc20a",
"text": "This paper describes progress toward a prototype implementation of a tool which aims to improve literacy in deaf high school and college students who are native (or near native) signers of American Sign Language (ASL). We envision a system that will take a piece of text written by a deaf student, analyze that text for grammatical errors, and engage that student in a tutorial dialogue, enabling the student to generate appropriate corrections to the text. A strong focus of this work is to develop a system which adapts this process to the knowledge level and learning strengths of the user and which has the flexibility to engage in multi-modal, multilingual tutorial instruction utilizing both English and the native language of the user.",
"title": ""
},
{
"docid": "7f6e966f3f924e18cb3be0ae618309e6",
"text": "designed shapes incorporating typedesign tradition, the rules related to visual appearance, and the design ideas of a skilled character designer. The typographic design process is structured and systematic: letterforms are visually related in weight, contrast, space, alignment, and style. To create a new typeface family, type designers generally start by designing a few key characters—such as o, h, p, and v— incorporating the most important structure elements such as vertical stems, round parts, diagonal bars, arches, and serifs (see Figure 1). They can then use the design features embedded into these structure elements (stem width, behavior of curved parts, contrast between thick and thin shape parts, and so on) to design the font’s remaining characters. Today’s industrial font description standards such as Adobe Type 1 or TrueType represent typographic characters by their shape outlines, because of the simplicity of digitizing the contours of well-designed, large-size master characters. However, outline characters only implicitly incorporate the designer’s intentions. Because their structure elements aren’t explicit, creating aesthetically appealing derived designs requiring coherent changes in character width, weight (boldness), and contrast is difficult. Outline characters aren’t suitable for optical scaling, which requires relatively fatter letter shapes at small sizes. Existing approaches for creating derived designs from outline fonts require either specifying constraints to maintain the coherence of structure elements across different characters or creating multiple master designs for the interpolation of derived designs. We present a new approach for describing and synthesizing typographic character shapes. Instead of describing characters by their outlines, we conceive each character as an assembly of structure elements (stems, bars, serifs, round parts, and arches) implemented by one or several shape components. We define the shape components by typeface-category-dependent global parameters such as the serif and junction types, by global font-dependent metrics such as the location of reference lines and the width of stems and curved parts, and by group and local parameters. (See the sidebar “Previous Work” for background information on the field of parameterizable fonts.)",
"title": ""
},
{
"docid": "b527ade4819e314a723789de58280724",
"text": "Securing collaborative filtering systems from malicious attack has become an important issue with increasing popularity of recommender Systems. Since recommender systems are entirely based on the input provided by the users or customers, they tend to become highly vulnerable to outside attacks. Prior research has shown that attacks can significantly affect the robustness of the systems. To prevent such attacks, researchers proposed several unsupervised detection mechanisms. While these approaches produce satisfactory results in detecting some well studied attacks, they are not suitable for all types of attacks studied recently. In this paper, we show that the unsupervised clustering can be used effectively for attack detection by computing detection attributes modeled on basic descriptive statistics. We performed extensive experiments and discussed different approaches regarding their performances. Our experimental results showed that attribute-based unsupervised clustering algorithm can detect spam users with a high degree of accuracy and fewer misclassified genuine users regardless of attack strategies.",
"title": ""
},
{
"docid": "73e4fed83bf8b1f473768ce15d6a6a86",
"text": "Improving science, technology, engineering, and mathematics (STEM) education, especially for traditionally disadvantaged groups, is widely recognized as pivotal to the U.S.'s long-term economic growth and security. In this article, we review and discuss current research on STEM education in the U.S., drawing on recent research in sociology and related fields. The reviewed literature shows that different social factors affect the two major components of STEM education attainment: (1) attainment of education in general, and (2) attainment of STEM education relative to non-STEM education conditional on educational attainment. Cognitive and social psychological characteristics matter for both major components, as do structural influences at the neighborhood, school, and broader cultural levels. However, while commonly used measures of socioeconomic status (SES) predict the attainment of general education, social psychological factors are more important influences on participation and achievement in STEM versus non-STEM education. Domestically, disparities by family SES, race, and gender persist in STEM education. Internationally, American students lag behind those in some countries with less economic resources. Explanations for group disparities within the U.S. and the mediocre international ranking of US student performance require more research, a task that is best accomplished through interdisciplinary approaches.",
"title": ""
},
{
"docid": "7c5f2c92cb3d239674f105a618de99e0",
"text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.",
"title": ""
},
{
"docid": "4a5abe07b93938e7549df068967731fc",
"text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.",
"title": ""
},
{
"docid": "1331dc5705d4b416054341519126f32f",
"text": "There is a large tradition of work in moral psychology that explores the capacity for moral judgment by focusing on the basic capacity to distinguish moral violations (e.g. hitting another person) from conventional violations (e.g. playing with your food). However, only recently have there been attempts to characterize the cognitive mechanisms underlying moral judgment (e.g. Cognition 57 (1995) 1; Ethics 103 (1993) 337). Recent evidence indicates that affect plays a crucial role in mediating the capacity to draw the moral/conventional distinction. However, the prevailing account of the role of affect in moral judgment is problematic. This paper argues that the capacity to draw the moral/conventional distinction depends on both a body of information about which actions are prohibited (a Normative Theory) and an affective mechanism. This account leads to the prediction that other normative prohibitions that are connected to an affective mechanism might be treated as non-conventional. An experiment is presented that indicates that \"disgust\" violations (e.g. spitting at the table), are distinguished from conventional violations along the same dimensions as moral violations.",
"title": ""
},
{
"docid": "ad5b787fd972c202a69edc98a8fbc7ba",
"text": "BACKGROUND\nIntimate partner violence (IPV) is a major public health problem with serious consequences for women's physical, mental, sexual and reproductive health. Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness of IPV being a product of social context. This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria.\n\n\nMETHODS\nMultilevel logistic regression analyses were performed on nationally-representative cross-sectional data including 19,226 women aged 15-49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage.\n\n\nRESULTS\nWomen who had experienced physical IPV, sexual IPV, and any IPV were more likely to have terminated a pregnancy compared to women who had not experienced these IPV types.IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justifying wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy.\n\n\nCONCLUSION\nFindings indicate the role of community influence in the association between IPV-exposure and terminated pregnancy, and stress the need for screening women seeking abortions for a history of abuse.",
"title": ""
},
{
"docid": "20718ae394b5f47387499e5f3360a888",
"text": "Crowding, the inability to recognize objects in clutter, sets a fundamental limit on conscious visual perception and object recognition throughout most of the visual field. Despite how widespread and essential it is to object recognition, reading and visually guided action, a solid operational definition of what crowding is has only recently become clear. The goal of this review is to provide a broad-based synthesis of the most recent findings in this area, to define what crowding is and is not, and to set the stage for future work that will extend our understanding of crowding well beyond low-level vision. Here we define six diagnostic criteria for what counts as crowding, and further describe factors that both escape and break crowding. All of these lead to the conclusion that crowding occurs at multiple stages in the visual hierarchy.",
"title": ""
},
{
"docid": "e5ce1ddd50a728fab41043324938a554",
"text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.",
"title": ""
},
{
"docid": "54234eef5d56951e408d2a163dfd27f8",
"text": "In many applications of wireless sensor networks (WSNs), node location is required to locate the monitored event once occurs. Mobility-assisted localization has emerged as an efficient technique for node localization. It works on optimizing a path planning of a location-aware mobile node, called mobile anchor (MA). The task of the MA is to traverse the area of interest (network) in a way that minimizes the localization error while maximizing the number of successful localized nodes. For simplicity, many path planning models assume that the MA has a sufficient source of energy and time, and the network area is obstacle-free. However, in many real-life applications such assumptions are rare. When the network area includes many obstacles, which need to be avoided, and the MA itself has a limited movement distance that cannot be exceeded, a dynamic movement approach is needed. In this paper, we propose two novel dynamic movement techniques that offer obstacle-avoidance path planning for mobility-assisted localization in WSNs. The movement planning is designed in a real-time using two swarm intelligence based algorithms, namely grey wolf optimizer and whale optimization algorithm. Both of our proposed models, grey wolf optimizer-based path planning and whale optimization algorithm-based path planning, provide superior outcomes in comparison to other existing works in several metrics including both localization ratio and localization error rate.",
"title": ""
},
{
"docid": "a488509590cd496669bdcc3ce8cc5fe5",
"text": "Ghrelin is an endogenous ligand for the growth hormone secretagogue receptor and a well-characterized food intake regulatory peptide. Hypothalamic ghrelin-, neuropeptide Y (NPY)-, and orexin-containing neurons form a feeding regulatory circuit. Orexins and NPY are also implicated in sleep-wake regulation. Sleep responses and motor activity after central administration of 0.2, 1, or 5 microg ghrelin in free-feeding rats as well as in feeding-restricted rats (1 microg dose) were determined. Food and water intake and behavioral responses after the light onset injection of saline or 1 microg ghrelin were also recorded. Light onset injection of ghrelin suppressed non-rapid-eye-movement sleep (NREMS) and rapid-eye-movement sleep (REMS) for 2 h. In the first hour, ghrelin induced increases in behavioral activity including feeding, exploring, and grooming and stimulated food and water intake. Ghrelin administration at dark onset also elicited NREMS and REMS suppression in hours 1 and 2, but the effect was not as marked as that, which occurred in the light period. In hours 3-12, a secondary NREMS increase was observed after some doses of ghrelin. In the feeding-restricted rats, ghrelin suppressed NREMS in hours 1 and 2 and REMS in hours 3-12. Data are consistent with the notion that ghrelin has a role in the integration of feeding, metabolism, and sleep regulation.",
"title": ""
},
{
"docid": "7b27d8b8f05833888b9edacf9ace0a18",
"text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.",
"title": ""
},
{
"docid": "8a7f4cde54d120aab50c9d4f45e67a43",
"text": "The purpose of this study was to assess the perceived discomfort of patrol officers related to equipment and vehicle design and whether there were discomfort differences between day and night shifts. A total of 16 participants were recruited (10 males, 6 females) from a local police force to participate for one full day shift and one full night shift. A series of questionnaires were administered to acquire information regarding comfort with specific car features and occupational gear, body part discomfort and health and lifestyle. The discomfort questionnaires were administered three times during each shift to monitor discomfort progression within a shift. Although there were no significant discomfort differences reported between the day and night shifts, perceived discomfort was identified for specific equipment, vehicle design and vehicle configuration, within each 12-h shift.",
"title": ""
},
{
"docid": "6150e19bffad5629c6d5cb7439663b13",
"text": "We present NeuroLinear, a system for extracting oblique decision rules from neural networks that have been trained for classiication of patterns. Each condition of an oblique decision rule corresponds to a partition of the attribute space by a hyperplane that is not necessarily axis-parallel. Allowing a set of such hyperplanes to form the boundaries of the decision regions leads to a signiicant reduction in the number of rules generated while maintaining the accuracy rates of the networks. We describe the components of NeuroLinear in detail by way of two examples using artiicial datasets. Our experimental results on real-world datasets show that the system is eeective in extracting compact and comprehensible rules with high predictive accuracy from neural networks.",
"title": ""
},
{
"docid": "ab0c80a10d26607134828c6b350089aa",
"text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.",
"title": ""
}
] | scidocsrr |
1b8ab416e44c8d94d782589e19c50540 | What Is the Evidence to Support the Use of Therapeutic Gardens for the Elderly? | [
{
"docid": "a86114aeee4c0bc1d6c9a761b50217d4",
"text": "OBJECTIVE\nThe purpose of this study was to investigate the effect of antidepressant treatment on hippocampal volumes in patients with major depression.\n\n\nMETHOD\nFor 38 female outpatients, the total time each had been in a depressive episode was divided into days during which the patient was receiving antidepressant medication and days during which no antidepressant treatment was received. Hippocampal gray matter volumes were determined by high resolution magnetic resonance imaging and unbiased stereological measurement.\n\n\nRESULTS\nLonger durations during which depressive episodes went untreated with antidepressant medication were associated with reductions in hippocampal volume. There was no significant relationship between hippocampal volume loss and time depressed while taking antidepressant medication or with lifetime exposure to antidepressants.\n\n\nCONCLUSIONS\nAntidepressants may have a neuroprotective effect during depression.",
"title": ""
}
] | [
{
"docid": "53a033a068a51cfa0b025c2cae508702",
"text": "In a grid connected photovoltaic system, the main aim is to design an efficient solar inverter with higher efficiency and which also controls the power that the inverter injects into the grid. The effectiveness of the general PV system anticipate on the productivity by which the direct current of the solar module is changed over into alternating current. The fundamental requirement to interface the solar module to the grid with increased productivity includes: Low THD of current injected to the grid, maximum power point, and high power factor. In this paper, a two stage topology without galvanic isolation is been carried out for a single phase grid connected photovoltaic inverter. The output from the PV panel is given to the DC/DC boost converter, maximum power point tracking (MPPT) control technique is being used to control the gate pulse of the IGBT of boost converter. The boosted output is fed to the highly efficient and reliable inverter concept (HERIC) inverter in order to convert DC into AC with higher efficiency.",
"title": ""
},
{
"docid": "d3ac465b3271e81f735086a2359fca9b",
"text": "Computing a curve to approximate data points is a problem encountered frequently in many applications in computer graphics, computer vision, CAD/CAM, and image processing. We present a novel and efficient method, called squared distance minimization (SDM), for computing a planar B-spline curve, closed or open, to approximate a target shape defined by a point cloud, that is, a set of unorganized, possibly noisy data points. We show that SDM significantly outperforms other optimization methods used currently in common practice of curve fitting. In SDM, a B-spline curve starts from some properly specified initial shape and converges towards the target shape through iterative quadratic minimization of the fitting error. Our contribution is the introduction of a new fitting error term, called the squared distance (SD) error term, defined by a curvature-based quadratic approximant of squared distances from data points to a fitting curve. The SD error term faithfully measures the geometric distance between a fitting curve and a target shape, thus leading to faster and more stable convergence than the point distance (PD) error term, which is commonly used in computer graphics and CAGD, and the tangent distance (TD) error term, which is often adopted in the computer vision community. To provide a theoretical explanation of the superior performance of SDM, we formulate the B-spline curve fitting problem as a nonlinear least squares problem and conclude that SDM is a quasi-Newton method which employs a curvature-based positive definite approximant to the true Hessian of the objective function. Furthermore, we show that the method based on the TD error term is a Gauss-Newton iteration, which is unstable for target shapes with high curvature variations, whereas optimization based on the PD error term is the alternating method that is known to have linear convergence.",
"title": ""
},
{
"docid": "3a2168e93c1f8025e93de1a7594e17d5",
"text": "1 Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems Tim Bass ERIM International & Silk Road Ann Arbor, MI 48113 Abstract| Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few rst steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.",
"title": ""
},
{
"docid": "bd0e01675a12193752588e6bc730edd5",
"text": "Online safety is everyone's responsibility---a concept much easier to preach than to practice.",
"title": ""
},
{
"docid": "bf707a96f7059b4c4f62d38255bb8333",
"text": "We present a system to detect passenger cars in aerial images along the road directions where cars appear as small objects. We pose this as a 3D object recognition problem to account for the variation in viewpoint and the shadow. We started from psychological tests to find important features for human detection of cars. Based on these observations, we selected the boundary of the car body, the boundary of the front windshield, and the shadow as the features. Some of these features are affected by the intensity of the car and whether or not there is a shadow along it. This information is represented in the structure of the Bayesian network that we use to integrate all features. Experiments show very promising results even on some very challenging images.",
"title": ""
},
{
"docid": "71cf493e0026fe057b1100c5ad1118ad",
"text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.",
"title": ""
},
{
"docid": "beedf5250dccbb0cf021618532dd98f6",
"text": "This paper deals with the problem of gender classification using fingerprint images. Our attempt to gender identification follows the use of machine learning to determine the differences between fingerprint images. Each image in the database was represented by a feature vector consisting of ridge thickness to valley thickness ratio (RTVTR) and the ridge density values. By using a support vector machine trained on a set of 150 male and 125 female images, we obtain a robust classifying function for male and female feature vector patterns.",
"title": ""
},
{
"docid": "b891bf4de3d1060b723c7a2e443acd10",
"text": "For a dynamic network based large vocabulary continuous speech recognizer, this paper proposes a fast language model (LM) look-ahead method using extended N -gram model. The extended N -gram model unifies the representations and score computations of the LM and the LM look-ahead tree, and thus greatly simplifies the decoder implementation and improves the LM look-ahead speed significantly, which makes higher-order LM look-ahead possible. The extended N -gram model is generated off-line before decoding starts. The generation procedure makes use of sparseness of backing-off N -gram models for efficient look-ahead score computation, and uses word-end node pushing and score quantitation to compact the model′s storage space. Experiments showed that with the same character error rate, the proposed method speeded up the overall recognition speed by a factor of 5∼ 9 than the traditional dynamic programming method which computes LM look-ahead scores on-line during the decoding process, and that using higher-order LM look-ahead algorithm can achieve a faster decoding speed and better accuracy than using the lower-order look-ahead ones.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "ef2cc160033a30ed1341b45468d93464",
"text": "A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection was taken from theses.com and contents analysed for their sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies, presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research.",
"title": ""
},
{
"docid": "913ea886485fae9b567146532ca458ac",
"text": "This article presents a new method to illustrate the feasibility of 3D topology creation. We base the 3D construction process on testing real cases of implementation of 3D parcels construction in a 3D cadastral system. With the utilization and development of dense urban space, true 3D geometric volume primitives are needed to represent 3D parcels with the adjacency and incidence relationship. We present an effective straightforward approach to identifying and constructing the valid volumetric cadastral object from the given faces, and build the topological relationships among 3D cadastral objects on-thefly, based on input consisting of loose boundary 3D faces made by surveyors. This is drastically different from most existing methods, which focus on the validation of single volumetric objects after the assumption of the object’s creation. Existing methods do not support the needed types of geometry/ topology (e.g. non 2-manifold, singularities) and how to create and maintain valid 3D parcels is still a challenge in practice. We will show that the method does not change the faces themselves and faces in a given input are independently specified. Various volumetric objects, including non-manifold 3D cadastral objects (legal spaces), can be constructed correctly by this method, as will be shown from the",
"title": ""
},
{
"docid": "988b56fdbfd0fbb33bb715adb173c63c",
"text": "This paper presents a new sensing system for home-based rehabilitation based on optical linear encoder (OLE), in which the motion of an optical encoder on a code strip is converted to the limb joints' goniometric data. A body sensing module was designed, integrating the OLE and an accelerometer. A sensor network of three sensing modules was established via controller area network bus to capture human arm motion. Experiments were carried out to compare the performance of the OLE module with that of commercial motion capture systems such as electrogoniometers and fiber-optic sensors. The results show that the inexpensive and simple-design OLE's performance is comparable to that of expensive systems. Moreover, a statistical study was conducted to confirm the repeatability and reliability of the sensing system. The OLE-based system has strong potential as an inexpensive tool for motion capture and arm-function evaluation for short-term as well as long-term home-based monitoring.",
"title": ""
},
{
"docid": "45d60590eeb7983c5f449719e51dd628",
"text": "Directly adding the knowledge triples obtained from open information extraction systems into a knowledge base is often impractical due to a vocabulary gap between natural language (NL) expressions and knowledge base (KB) representation. This paper aims at learning to map relational phrases in triples from natural-language-like statement to knowledge base predicate format. We train a word representation model on a vector space and link each NL relational pattern to the semantically equivalent KB predicate. Our mapping result shows not only high quality, but also promising coverage on relational phrases compared to previous research.",
"title": ""
},
{
"docid": "8efe66661d6c1bb7e96c4c2cb2fbdeec",
"text": "IT Leader Sample SIC Code Control Sample SIC Code Consol Energy Inc 1220 Walter Energy Inc 1220 Halliburton Co 1389 Schlumberger Ltd 1389 Standard Pacific Corp 1531 M/I Homes Inc 1531 Beazer Homes USA Inc 1531 Hovnanian Entrprs Inc -Cl A 1531 Toll Brothers Inc 1531 MDC Holdings Inc 1531 D R Horton Inc 1531 Ryland Group Inc 1531 Lennar Corp 1531 KB Home 1531 Granite Construction Inc 1600 Empresas Ica Soc Ctl ADR 1600 Fluor Corp 1600 Alstom ADR 1600 Gold Kist Inc 2015 Sadia Sa ADR 2015 Kraft Foods Inc 2000 ConAgra Foods Inc 2000 Smithfield Foods Inc 2011 Hormel Foods Corp 2011 Campbell Soup Co 2030 Heinz (H J) Co 2030 General Mills Inc 2040 Kellogg Co 2040 Imperial Sugar Co 2060 Wrigley (Wm) Jr Co 2060 Hershey Co 2060 Tate & Lyle Plc ADR 2060 Molson Coors Brewing Co 2082 Comp Bebidas Americas ADR 2082 Constellation Brands Cl A 2084 Gruma S.A.B. de C.V. ADR B 2040 Brown-Forman Cl B 2085 Coca Cola Hellenic Bttlg ADR 2086",
"title": ""
},
{
"docid": "7b8dffab502fae2abbea65464e2727aa",
"text": "Bone tissue is continuously remodeled through the concerted actions of bone cells, which include bone resorption by osteoclasts and bone formation by osteoblasts, whereas osteocytes act as mechanosensors and orchestrators of the bone remodeling process. This process is under the control of local (e.g., growth factors and cytokines) and systemic (e.g., calcitonin and estrogens) factors that all together contribute for bone homeostasis. An imbalance between bone resorption and formation can result in bone diseases including osteoporosis. Recently, it has been recognized that, during bone remodeling, there are an intricate communication among bone cells. For instance, the coupling from bone resorption to bone formation is achieved by interaction between osteoclasts and osteoblasts. Moreover, osteocytes produce factors that influence osteoblast and osteoclast activities, whereas osteocyte apoptosis is followed by osteoclastic bone resorption. The increasing knowledge about the structure and functions of bone cells contributed to a better understanding of bone biology. It has been suggested that there is a complex communication between bone cells and other organs, indicating the dynamic nature of bone tissue. In this review, we discuss the current data about the structure and functions of bone cells and the factors that influence bone remodeling.",
"title": ""
},
{
"docid": "8f916f7be3048ae2a367096f4f82207d",
"text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.",
"title": ""
},
{
"docid": "6fb868748f5c2ed6d8ae34721bc445eb",
"text": "Handling imbalanced datasets is a challenging problem that if not treated correctly results in reduced classification performance. Imbalanced datasets are commonly handled using minority oversampling, whereas the SMOTE algorithm is a successful oversampling algorithm with numerous extensions. SMOTE extensions do not have a theoretical guarantee during training to work better than SMOTE and in many instances their performance is data dependent. In this paper we propose a novel extension to the SMOTE algorithm with a theoretical guarantee for improved classification performance. The proposed approach considers the classification performance of both the majority and minority classes. In the proposed approach CGMOS (Certainty Guided Minority OverSampling) new data points are added by considering certainty changes in the dataset. The paper provides a proof that the proposed algorithm is guaranteed to work better than SMOTE for training data. Further, experimental results on 30 real-world datasets show that CGMOS works better than existing algorithms when using 6 different classifiers.",
"title": ""
},
{
"docid": "bf4d9bcadd48efcea886ea442077acb3",
"text": "Satellite remote sensing is a valuable tool for monitoring flooding. Microwave sensors are especially appropriate instruments, as they allow the differentiation of inundated from non-inundated areas, regardless of levels of solar illumination or frequency of cloud cover in regions experiencing substantial rainy seasons. In the current study we present the longest synthetic aperture radar-based time series of flood and inundation information derived for the Mekong Delta that has been analyzed for this region so far. We employed overall 60 Envisat ASAR Wide Swath Mode data sets at a spatial resolution of 150 meters acquired during the years 2007–2011 to facilitate a thorough understanding of the flood regime in the Mekong Delta. The Mekong Delta in southern Vietnam comprises 13 provinces and is home to 18 million inhabitants. Extreme dry seasons from late December to May and wet seasons from June to December characterize people’s rural life. In this study, we show which areas of the delta are frequently affected by floods and which regions remain dry all year round. Furthermore, we present which areas are flooded at which frequency and elucidate the patterns of flood progression over the course of the rainy season. In this context, we also examine the impact of dykes on floodwater emergence and assess the relationship between retrieved flood occurrence patterns and land use. In addition, the advantages and shortcomings of ENVISAT ASAR-WSM based flood mapping are discussed. The results contribute to a comprehensive understanding of Mekong Delta flood OPEN ACCESS Remote Sens. 2013, 5 688 dynamics in an environment where the flow regime is influenced by the Mekong River, overland water-flow, anthropogenic floodwater control, as well as the tides.",
"title": ""
},
{
"docid": "a3cfab5203348546d901e18ab4cc7c3a",
"text": "Most of neural language models use different kinds of embeddings for word prediction. While word embeddings can be associated to each word in the vocabulary or derived from characters as well as factored morphological decomposition, these word representations are mainly used to parametrize the input, i.e. the context of prediction. This work investigates the effect of using subword units (character and factored morphological decomposition) to build output representations for neural language modeling. We present a case study on Czech, a morphologically-rich language, experimenting with different input and output representations. When working with the full training vocabulary, despite unstable training, our experiments show that augmenting the output word representations with character-based embeddings can significantly improve the performance of the model. Moreover, reducing the size of the output look-up table, to let the character-based embeddings represent rare words, brings further improvement.",
"title": ""
},
{
"docid": "9361344286f994c8432f3f6bb0f1a86c",
"text": "Proper formulation of features plays an important role in shorttext classification tasks as the amount of text available is very little. In literature, Term Frequency Inverse Document Frequency (TF-IDF) is commonly used to create feature vectors for such tasks. However, TF-IDF formulation does not utilize the class information available in supervised learning. For classification problems, if it is possible to identify terms that can strongly distinguish among classes, then more weight can be given to those terms during feature construction phase. This may result in improved classifier performance with the incorporation of extra class label related information. We propose a supervised feature construction method to classify tweets, based on the actionable information that might be present, posted during different disaster scenarios. Improved classifier performance for such classification tasks can be helpful in the rescue and relief operations. We used three benchmark datasets containing tweets posted during Nepal and Italy earthquakes in 2015 and 2016 respectively. Experimental results show that the proposed method obtains better classification performance on these benchmark datasets.",
"title": ""
}
] | scidocsrr |
8ebdc8fee8a3c35cd03cb1a3c1bae8d1 | Novel Cellular Active Array Antenna System at Base Station for Beyond 4G | [
{
"docid": "cac379c00a4146acd06c446358c3e95a",
"text": "In this work, a new base station antenna is proposed. Two separate frequency bands with separate radiating elements are used in each band. The frequency band separation ratio is about 1.3:1. These elements are arranged with different spacing (wider spacing for the lower frequency band, and narrower spacing for the higher frequency band). Isolation between bands inherently exists in this approach. This avoids the grating lobe effect, and mitigates the beam narrowing (dispersion) seen with fixed element spacing covering the whole wide bandwidth. A new low-profile cross dipole is designed, which is integrated in the array with an EBG/AMC structure for reducing the size of low band elements and decreasing coupling at high band.",
"title": ""
},
{
"docid": "0dd462fa371d270a63e7ad88b070d8a2",
"text": "Currently, many operators worldwide are deploying Long Term Evolution (LTE) to provide much faster access with lower latency and higher efficiency than its predecessors 3G and 3.5G. Meanwhile, the service rollout of LTE-Advanced, which is an evolution of LTE and a “true 4G” mobile broadband, is being underway to further enhance LTE performance. However, the anticipated challenges of the next decade (2020s) are so tremendous and diverse that there is a vastly increased need for a new generation mobile communications system with even further enhanced capabilities and new functionalities, namely a fifth generation (5G) system. Envisioning the development of a 5G system by 2020, at DOCOMO we started studies on future radio access as early as 2010, just after the launch of LTE service. The aim at that time was to anticipate the future user needs and the requirements of 10 years later (2020s) in order to identify the right concept and radio access technologies for the next generation system. The identified 5G concept consists of an efficient integration of existing spectrum bands for current cellular mobile and future new spectrum bands including higher frequency bands, e.g., millimeter wave, with a set of spectrum specific and spectrum agnostic technologies. Since a few years ago, we have been conducting several proof-of-concept activities and investigations on our 5G concept and its key technologies, including the development of a 5G real-time simulator, experimental trials of a wide range of frequency bands and technologies and channel measurements for higher frequency bands. In this paper, we introduce an overview of our views on the requirements, concept and promising technologies for 5G radio access, in addition to our ongoing activities for paving the way toward the realization of 5G by 2020. key words: next generation mobile communications system, 5G, 4G, LTE, LTE-advanced",
"title": ""
}
] | [
{
"docid": "6660bcfd564726421d9eaaa696549454",
"text": "When building intelligent spaces, the knowledge representation for encapsulating rooms, users, groups, roles, and other information is a fundamental design question. We present a semantic network as such a representation, and demonstrate its utility as a basis for ongoing work.",
"title": ""
},
{
"docid": "d13ecf582ac820cdb8ea6353c44c535f",
"text": "We have previously shown that, while the intrinsic quality of the oocyte is the main factor affecting blastocyst yield during bovine embryo development in vitro, the main factor affecting the quality of the blastocyst is the postfertilization culture conditions. Therefore, any improvement in the quality of blastocysts produced in vitro is likely to derive from the modification of the postfertilization culture conditions. The objective of this study was to examine the effect of the presence or absence of serum and the concentration of BSA during the period of embryo culture in vitro on 1) cleavage rate, 2) the kinetics of embryo development, 3) blastocyst yield, and 4) blastocyst quality, as assessed by cryotolerance and gene expression patterns. The quantification of all gene transcripts was carried out by real-time quantitative reverse transcription-polymerase chain reaction. Bovine blastocysts from four sources were used: 1) in vitro culture in synthetic oviduct fluid (SOF) supplemented with 3 mg/ml BSA and 10% fetal calf serum (FCS), 2) in vitro culture in SOF + 3 mg/ml BSA in the absence of serum, 3) in vitro culture in SOF + 16 mg/ml BSA in the absence of serum, and 4) in vivo blastocysts. There was no difference in overall blastocyst yield at Day 9 between the groups. However, significantly more blastocysts were present by Day 6 in the presence of 10% serum (20.0%) compared with 3 mg/ml BSA (4.6%, P < 0.001) or 16 mg/ml BSA (11.6%, P < 0.01). By Day 7, however, this difference had disappeared. Following vitrification, there was no difference in survival between blastocysts produced in the presence of 16 mg/ml BSA or those produced in the presence of 10% FCS; the survival of both groups was significantly lower than the in vivo controls at all time points and in terms of hatching rate. In contrast, survival of blastocysts produced in SOF + 3 mg/ml BSA in the absence of serum was intermediate, with no difference remaining at 72 h when compared with in vivo embryos. Differences in relative mRNA abundance among the two groups of blastocysts analyzed were found for genes related to apoptosis (Bax), oxidative stress (MnSOD, CuZnSOD, and SOX), communication through gap junctions (Cx31 and Cx43), maternal recognition of pregnancy (IFN-tau), and differentiation and implantation (LIF and LR-beta). The presence of serum during the culture period resulted in a significant increase in the level of expression of MnSOD, SOX, Bax, LIF, and LR-beta. The level of expression of Cx31 and Cu/ZnSOD also tended to be increased, although the difference was not significant. In contrast, the level of expression of Cx43 and IFN-tau was decreased in the presence of serum. In conclusion, using a combination of measures of developmental competence (cleavage and blastocyst rates) and qualitative measures such as cryotolerance and relative mRNA abundance to give a more complete picture of the consequences of modifying medium composition on the embryo, we have shown that conditions of postfertilization culture, in particular, the presence of serum in the medium, can affect the speed of embryo development and the quality of the resulting blastocysts. The reduced cryotolerance of blastocysts generated in the presence of serum is accompanied by deviations in the relative abundance of developmentally important gene transcripts. 
Omission of serum during the postfertilization culture period can significantly improve the cryotolerance of the blastocysts to a level intermediate between serum-generated blastocysts and those derived in vivo. The challenge now is to try and bridge this gap.",
"title": ""
},
{
"docid": "ba96f2099e6e44ad14b85bfc2b49ddff",
"text": "In this paper, an improved multimodel optimal quadratic control structure for variable speed, pitch regulated wind turbines (operating at high wind speeds) is proposed in order to integrate high levels of wind power to actively provide a primary reserve for frequency control. On the basis of the nonlinear model of the studied plant, and taking into account the wind speed fluctuations, and the electrical power variation, a multimodel linear description is derived for the wind turbine, and is used for the synthesis of an optimal control law involving a state feedback, an integral action and an output reference model. This new control structure allows a rapid transition of the wind turbine generated power between different desired set values. This electrical power tracking is ensured with a high-performance behavior for all other state variables: turbine and generator rotational speeds and mechanical shaft torque; and smooth and adequate evolution of the control variables.",
"title": ""
},
{
"docid": "0aab0c0fa6a1b0f283478b390dece614",
"text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. The design and construction of this device was conducted at the Oregon Institute of Technology in Wilsonville, Oregon.",
"title": ""
},
{
"docid": "a1d0bf0d28bbe3dd568e7e01bc9d59c3",
"text": "A novel coupling technique for circularly polarized annular-ring patch antenna is developed and discussed. The circular polarization (CP) radiation of the annular-ring patch antenna is achieved by a simple microstrip feed line through the coupling of a fan-shaped patch on the same plane of the antenna. Proper positioning of the coupling fan-shaped patch excites two orthogonal resonant modes with 90 phase difference, and a pure circular polarization is obtained. The dielectric material is a cylindrical block of ceramic with a permittivity of 25 and that reduces the size of the antenna. The prototype has been designed and fabricated and found to have an impedance bandwidth of 2.3% and a 3 dB axial-ratio bandwidth of about 0.6% at the center frequency of 2700 MHz. The characteristics of the proposed antenna have been by simulation software HFSS and experiment. The measured and simulated results are in good agreement.",
"title": ""
},
{
"docid": "6b718717d5ecef343a8f8033803a55e6",
"text": "BACKGROUND\nMedication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data.\n\n\nOBJECTIVE\nTo unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations.\n\n\nMETHODS\nWe have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types.\n\n\nRESULTS\nOur results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%.\n\n\nCONCLUSIONS\nIt shows that classical learning models (SVM) remains advantageous over deep learning models (RNN variants) for clinical relation identification, especially for long-distance intersentential relations. However, RNNs demonstrate a great potential of significant improvement if more training data become available. Our work is an important step toward mining EHRs to improve the efficacy of drug safety surveillance. Most importantly, the annotated data used in this study will be made publicly available, which will further promote drug safety research in the community.",
"title": ""
},
{
"docid": "afbd0ecad829246ed7d6e1ebcebf5815",
"text": "Battery thermal management system (BTMS) is essential for electric-vehicle (EV) and hybrid-vehicle (HV) battery packs to operate effectively in all climates. Lithium-ion (Li-ion) batteries offer many advantages to the EV such as high power and high specific energy. However, temperature affects their performance, safety, and productive life. This paper is about the design and evaluation of a BTMS based on the Peltier effect heat pumps. The discharge efficiency of a 60-Ah prismatic Li-ion pouch cell was measured under different rates and different ambient temperature values. The obtained results were used to design a solid-state BTMS based on Peltier thermoelectric coolers (TECs). The proposed BTMS is then modeled and evaluated at constant current discharge in the laboratory. In addition, The BTMS was installed in an EV that was driven in the US06 cycle. The thermal response and the energy consumption of the proposed BTMS were satisfactory.",
"title": ""
},
{
"docid": "6f5ada16b55afc21f7291f7764ec85ee",
"text": "Breast cancer is often treated with radiotherapy (RT), with two opposing tangential fields. When indicated, supraclavicular lymph nodes have to be irradiated, and a third anterior field is applied. The junction region has the potential to be over or underdosed. To overcome this problem, many techniques have been proposed. A literature review of 3 Dimensional Conformal RT (3D CRT) and older 3-field techniques was carried out. Intensity Modulated RT (IMRT) techniques are also briefly discussed. Techniques are categorized, few characteristic examples are presented and a comparison is attempted. Three-field techniques can be divided in monoisocentric and two-isocentric. Two-isocentric techniques can be further divided in full field and half field techniques. Monoisocentric techniques show certain great advantages over two-isocentric techniques. However, they are not always applicable and they require extra caution as they are characterized by high dose gradient in the junction region. IMRT has been proved to give better dosimetric results. Three-field matching is a complicated procedure, with potential of over or undredosage in the junction region. Many techniques have been proposed, each with advantages and disadvantages. Among them, monoisocentric techniques, when carefully applied, are the ideal choice, provided IMRT facility is not available. Otherwise, a two-isocentric half beam technique is recommended.",
"title": ""
},
{
"docid": "601ffeb412bac0baa6fdb6da7a4a9a42",
"text": "CLCWeb: Comparative Literature and Culture, the peer-reviewed, full-text, and open-access learned journal in the humanities and social sciences, publishes new scholarship following tenets of the discipline of comparative literature and the field of cultural studies designated as \"comparative cultural studies.\" Publications in the journal are indexed in the Annual Bibliography of English Language and Literature (Chadwyck-Healey), the Arts and Humanities Citation Index (Thomson Reuters ISI), the Humanities Index (Wilson), Humanities International Complete (EBSCO), the International Bibliography of the Modern Language Association of America, and Scopus (Elsevier). The journal is affiliated with the Purdue University Press monograph series of Books in Comparative Cultural Studies. Contact: <[email protected]>",
"title": ""
},
{
"docid": "36fef38de53386e071ee2a1996aa733f",
"text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models.",
"title": ""
},
{
"docid": "8589ec481e78d14fbeb3e6e4205eee50",
"text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fd70fff204201c33ed3d901c48560980",
"text": "I n the early 1960s, the average American adult male weighed 168 pounds. Today, he weighs nearly 180 pounds. Over the same time period, the average female adult weight rose from 143 pounds to over 155 pounds (U.S. Department of Health and Human Services, 1977, 1996). In the early 1970s, 14 percent of the population was classified as medically obese. Today, obesity rates are two times higher (Centers for Disease Control, 2003). Weights have been rising in the United States throughout the twentieth century, but the rise in obesity since 1980 is fundamentally different from past changes. For most of the twentieth century, weights were below levels recommended for maximum longevity (Fogel, 1994), and the increase in weight represented an increase in health, not a decrease. Today, Americans are fatter than medical science recommends, and weights are still increasing. While many other countries have experienced significant increases in obesity, no other developed country is quite as heavy as the United States. What explains this growth in obesity? Why is obesity higher in the United States than in any other developed country? The available evidence suggests that calories expended have not changed significantly since 1980, while calories consumed have risen markedly. But these facts just push the puzzle back a step: why has there been an increase in calories consumed? We propose a theory based on the division of labor in food preparation. In the 1960s, the bulk of food preparation was done by families that cooked their own food and ate it at home. Since then, there has been a revolution in the mass preparation of food that is roughly comparable to the mass",
"title": ""
},
{
"docid": "3cf174505ecd647930d762327fc7feb6",
"text": "The purpose of the present study was to examine the relationship between workplace friendship and social loafing effect among employees in Certified Public Accounting (CPA) firms. Previous studies showed that workplace friendship has both positive and negative effects, meaning that there is an inconsistent relationship between workplace friendship and social loafing. The present study investigated the correlation between workplace friendship and social loafing effect among employees from CPA firms in Taiwan. The study results revealed that there was a negative relationship between workplace friendship and social loafing effect among CPA employees. In other words, the better the workplace friendship, the lower the social loafing effect. An individual would not put less effort in work when there was a low social loafing effect.",
"title": ""
},
{
"docid": "b4d5bfc26bac32e1e1db063c3696540a",
"text": "Symmetric positive semidefinite (SPSD) matrix approximation is an important problem with applications in kernel methods. However, existing SPSD matrix approximation methods such as the Nyström method only have weak error bounds. In this paper we conduct in-depth studies of an SPSD matrix approximation model and establish strong relative-error bounds. We call it the prototype model for it has more efficient and effective extensions, and some of its extensions have high scalability. Though the prototype model itself is not suitable for large-scale data, it is still useful to study its properties, on which the analysis of its extensions relies. This paper offers novel theoretical analysis, efficient algorithms, and a highly accurate extension. First, we establish a lower error bound for the prototype model, and we improve the error bound of an existing column selection algorithm to match the lower bound. In this way, we obtain the first optimal column selection algorithm for the prototype model. We also prove that the prototype model is exact under certain conditions. Second, we develop a simple column selection algorithm with a provable error bound. Third, we propose a socalled spectral shifting model to make the approximation more accurate when the spectrum of the matrix decays slowly, and the improvement is theoretically quantified. The spectral shifting method can also be applied to improve other SPSD matrix approximation models.",
"title": ""
},
{
"docid": "0a143c2d4af3cc726964a90927556399",
"text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exists like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefiicients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.",
"title": ""
},
{
"docid": "c2553e6256ef130fbd5bc0029bb5e7b7",
"text": "Using Blockchain seems a promising approach for Business Process Reengineering (BPR) to alleviate trust issues among stakeholders, by providing decentralization, transparency, traceability, and immutability of information along with its business logic. However, little work seems to be available on utilizing Blockchain for supporting BPR in a systematic and rational way, potentially leading to disappointments and even doubts on the utility of Blockchain. In this paper, as ongoing research, we outline Fides - a framework for exploiting Blockchain towards enhancing the trustworthiness for BPR. Fides supports diagnosing trust issues with AS-IS business processes, exploring TO-BE business process alternatives using Blockchain, and selecting among the alternatives. A business process of a retail chain for a food supply chain is used throughout the paper to illustrate Fides concepts.",
"title": ""
},
{
"docid": "562ec4c39f0d059fbb9159ecdecd0358",
"text": "In this paper, we propose the factorized hidden layer FHL approach to adapt the deep neural network DNN acoustic models for automatic speech recognition ASR. FHL aims at modeling speaker dependent SD hidden layers by representing an SD affine transformation as a linear combination of bases. The combination weights are low-dimensional speaker parameters that can be initialized using speaker representations like i-vectors and then reliably refined in an unsupervised adaptation fashion. Therefore, our method provides an efficient way to perform both adaptive training and test-time adaptation. Experimental results have shown that the FHL adaptation improves the ASR performance significantly, compared to the standard DNN models, as well as other state-of-the-art DNN adaptation approaches, such as training with the speaker-normalized CMLLR features, speaker-aware training using i-vector and learning hidden unit contributions LHUC. For Aurora 4, FHL achieves 3.8% and 2.3% absolute improvements over the standard DNNs trained on the LDA + STC and CMLLR features, respectively. It also achieves 1.7% absolute performance improvement over a system that combines the i-vector adaptive training with LHUC adaptation. For the AMI dataset, FHL achieved 1.4% and 1.9% absolute improvements over the sequence-trained CMLLR baseline systems, for the IHM and SDM tasks, respectively.",
"title": ""
},
{
"docid": "8d9cae70a7334afcd558c0fa850d551a",
"text": "A popular approach to solving large probabilistic systems relies on aggregating states based on a measure of similarity. Many approaches in the literature are heuristic. A number of recent methods rely instead on metrics based on the notion of bisimulation, or behavioral equivalence between states (Givan et al., 2003; Ferns et al., 2004). An integral component of such metrics is the Kantorovich metric between probability distributions. However, while this metric enables many satisfying theoretical properties, it is costly to compute in practice. In this paper, we use techniques from network optimization and statistical sampling to overcome this problem. We obtain in this manner a variety of distance functions for MDP state aggregation that differ in the tradeoff between time and space complexity, as well as the quality of the aggregation. We provide an empirical evaluation of these tradeoffs.",
"title": ""
},
{
"docid": "e22564e88d82b91e266b0a118bd2ec91",
"text": "Non-lethal dose of 70% ethanol extract of the Nerium oleander dry leaves (1000 mg/kg body weight) was subcutaneously injected into male and female mice once a week for 9 weeks (total 10 doses). One day after the last injection, final body weight gain (relative percentage to the initial body weight) had a tendency, in both males and females, towards depression suggesting a metabolic insult at other sites than those involved in myocardial function. Multiple exposure of the mice to the specified dose failed to express a significant influence on blood parameters (WBC, RBC, Hb, HCT, PLT) as well as myocardium. On the other hand, a lethal dose (4000 mg/kg body weight) was capable of inducing progressive changes in myocardial electrical activity ending up in cardiac arrest. The electrocardiogram abnormalities could be brought about by the expected Na+, K(+)-ATPase inhibition by the cardiac glycosides (cardenolides) content of the lethal dose.",
"title": ""
},
{
"docid": "3b64e99ea608819fc4bf06a6850a5aff",
"text": "Cloud computing is one of the most useful technology that is been widely used all over the world. It generally provides on demand IT services and products. Virtualization plays a major role in cloud computing as it provides a virtual storage and computing services to the cloud clients which is only possible through virtualization. Cloud computing is a new business computing paradigm that is based on the concepts of virtualization, multi-tenancy, and shared infrastructure. This paper discusses about cloud computing, how virtualization is done in cloud computing, virtualization basic architecture, its advantages and effects [1].",
"title": ""
}
] | scidocsrr |
2b59c3f8ca29f7ebafd26cf004517e8c | Chainsaw: Chained Automated Workflow-based Exploit Generation | [
{
"docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1",
"text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in P H applications. STRANGER uses symbolic forward and backward reachability analyses t o compute the possible values that the string expressions can take during progr am execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that c racterize all malicious inputs that can be used to generate attacks.",
"title": ""
}
] | [
{
"docid": "279c377e12cdb8aec7242e0e9da2dd26",
"text": "It is well accepted that pain is a multidimensional experience, but little is known of how the brain represents these dimensions. We used positron emission tomography (PET) to indirectly measure pain-evoked cerebral activity before and after hypnotic suggestions were given to modulate the perceived intensity of a painful stimulus. These techniques were similar to those of a previous study in which we gave suggestions to modulate the perceived unpleasantness of a noxious stimulus. Ten volunteers were scanned while tonic warm and noxious heat stimuli were presented to the hand during four experimental conditions: alert control, hypnosis control, hypnotic suggestions for increased-pain intensity and hypnotic suggestions for decreased-pain intensity. As shown in previous brain imaging studies, noxious thermal stimuli presented during the alert and hypnosis-control conditions reliably activated contralateral structures, including primary somatosensory cortex (S1), secondary somatosensory cortex (S2), anterior cingulate cortex, and insular cortex. Hypnotic modulation of the intensity of the pain sensation led to significant changes in pain-evoked activity within S1 in contrast to our previous study in which specific modulation of pain unpleasantness (affect), independent of pain intensity, produced specific changes within the ACC. This double dissociation of cortical modulation indicates a relative specialization of the sensory and the classical limbic cortical areas in the processing of the sensory and affective dimensions of pain.",
"title": ""
},
{
"docid": "da7f869037f40ab8666009d85d9540ff",
"text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.",
"title": ""
},
{
"docid": "9a0b6db90dc15e04f4b860e4355996f2",
"text": "This work discusses a mix of challenges arising from Watson Discovery Advisor (WDA), an industrial strength descendant of the Watson Jeopardy! Question Answering system currently used in production in industry settings. Typical challenges include generation of appropriate training questions, adaptation to new industry domains, and iterative improvement of the system through manual error analyses.",
"title": ""
},
{
"docid": "cac081006bb1a7daefe3c62b6c80fe10",
"text": "A novel chaotic time-series prediction method based on support vector machines (SVMs) and echo-state mechanisms is proposed. The basic idea is replacing \"kernel trick\" with \"reservoir trick\" in dealing with nonlinearity, that is, performing linear support vector regression (SVR) in the high-dimension \"reservoir\" state space, and the solution benefits from the advantages from structural risk minimization principle, and we call it support vector echo-state machines (SVESMs). SVESMs belong to a special kind of recurrent neural networks (RNNs) with convex objective function, and their solution is global, optimal, and unique. SVESMs are especially efficient in dealing with real life nonlinear time series, and its generalization ability and robustness are obtained by regularization operator and robust loss function. The method is tested on the benchmark prediction problem of Mackey-Glass time series and applied to some real life time series such as monthly sunspots time series and runoff time series of the Yellow River, and the prediction results are promising",
"title": ""
},
{
"docid": "1e18f23ad8ddc4333406c4703d51d92b",
"text": "from its introductory beginning and across its 446 pages, centered around the notion that computer simulations and games are not at all disparate but very much aligning concepts. This not only makes for an interesting premise but also an engaging book overall which offers a resource into an educational subject (for it is educational simulations that the authors predominantly address) which is not overly saturated. The aim of the book as a result of this decision, which is explained early on, but also because of its subsequent structure, is to enlighten its intended audience in the way that effective and successful simulations/games operate (on a theoretical/conceptual and technical level, although in the case of the latter the book intentionally never delves into the realms of software programming specifics per se), can be designed, built and, finally, evaluated. The book is structured in three different and distinct parts, with four chapters in the first, six chapters in the second and six chapters in the third and final one. The first chapter is essentially a \" teaser \" , according to the authors. There are a couple of more traditional simulations described, a couple of well-known mainstream games (Mario Kart and Portal 2, interesting choices, especially the first one) and then the authors proceed to present applications which show the simulation and game convergence. These applications have a strong educational outlook (covering on this occasion very diverse topics, from flood prevention to drink driving awareness, amongst others). This chapter works very well in initiating the audience in the subject matter and drawing the necessary parallels. With all of the simula-tions/games/educational applications included BOOK REVIEW",
"title": ""
},
{
"docid": "9593712906aa8272716a7fe5b482b91d",
"text": "User stories are a widely used notation for formulating requirements in agile development projects. Despite their popularity in industry, little to no academic work is available on assessing their quality. The few existing approaches are too generic or employ highly qualitative metrics. We propose the Quality User Story Framework, consisting of 14 quality criteria that user story writers should strive to conform to. Additionally, we introduce the conceptual model of a user story, which we rely on to design the AQUSA software tool. AQUSA aids requirements engineers in turning raw user stories into higher-quality ones by exposing defects and deviations from good practice in user stories. We evaluate our work by applying the framework and a prototype implementation to three user story sets from industry.",
"title": ""
},
{
"docid": "511991822f427c3f62a4c091594e89e3",
"text": "Reinforcement learning has recently gained popularity due to its many successful applications in various fields. In this project reinforcement learning is implemented in a simple warehouse situation where robots have to learn to interact with each other while performing specific tasks. The aim is to study whether reinforcement learning can be used to train multiple agents. Two different methods have been used to achieve this aim, Q-learning and deep Q-learning. Due to practical constraints, this paper cannot provide a comprehensive review of real life robot interactions. Both methods are tested on single-agent and multi-agent models in Python computer simulations. The results show that the deep Q-learning model performed better in the multiagent simulations than the Q-learning model and it was proven that agents can learn to perform their tasks to some degree. Although, the outcome of this project cannot yet be considered sufficient for moving the simulation into reallife, it was concluded that reinforcement learning and deep learning methods can be seen as suitable for modelling warehouse robots and their interactions.",
"title": ""
},
{
"docid": "a6097c9898acd91feac6792251e77285",
"text": "Pregabalin is a substance which modulates monoamine release in \"hyper-excited\" neurons. It binds potently to the α2-δ subunit of calcium channels. Pilotstudies on alcohol- and benzodiazepine dependent patients reported a reduction of withdrawal symptoms through Pregabalin. To our knowledge, no studies have been conducted so far assessing this effect in opiate dependent patients. We report the case of a 43-year-old patient with Pregabalin intake during opiate withdrawal. Multiple inpatient and outpatient detoxifications from maintenance replacement therapy with Buprenorphine in order to reach complete abstinence did not show success because of extended withdrawal symptoms and repeated drug intake. Finally he disrupted his heroine intake with a simultaneously self administration of 300 mg Pregabaline per day and was able to control the withdrawal symptoms. In this time we did control the Pregabalin level in serum and urine in our outpatient clinic. In the course the patient reported that he could treat further relapse with opiate or opioids with Pregabalin successful. This case shows first details for Pregabalin to relief withdrawal symptoms in opiate withdrawal.",
"title": ""
},
{
"docid": "2eba092d19cc8fb35994e045f826e950",
"text": "Deep neural networks have proven to be particularly eective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardwareoriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy eciency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-ecient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their eectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. is article represents the rst survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the eld.",
"title": ""
},
{
"docid": "5a573ae9fad163c6dfe225f59b246b7f",
"text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.",
"title": ""
},
{
"docid": "b999fe9bd7147ef9c555131d106ea43e",
"text": "This paper presents the DeepCD framework which learns a pair of complementary descriptors jointly for image patch representation by employing deep learning techniques. It can be achieved by taking any descriptor learning architecture for learning a leading descriptor and augmenting the architecture with an additional network stream for learning a complementary descriptor. To enforce the complementary property, a new network layer, called data-dependent modulation (DDM) layer, is introduced for adaptively learning the augmented network stream with the emphasis on the training data that are not well handled by the leading stream. By optimizing the proposed joint loss function with late fusion, the obtained descriptors are complementary to each other and their fusion improves performance. Experiments on several problems and datasets show that the proposed method1 is simple yet effective, outperforming state-of-the-art methods.",
"title": ""
},
{
"docid": "82e5d8a3ee664f36afec3aa1b2e976f9",
"text": "Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks.",
"title": ""
},
{
"docid": "44017678b3da8c8f4271a9832280201e",
"text": "Data warehouses are users driven; that is, they allow end-users to be in control of the data. As user satisfaction is commonly acknowledged as the most useful measurement of system success, we identify the underlying factors of end-user satisfaction with data warehouses and develop an instrument to measure these factors. The study demonstrates that most of the items in classic end-user satisfaction measure are still valid in the data warehouse environment, and that end-user satisfaction with data warehouses depends heavily on the roles and performance of organizational information centers. # 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "d97669811124f3c6f4cef5b2a144a46c",
"text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "058340d519ade55db4d6db879df95253",
"text": "Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.",
"title": ""
},
{
"docid": "d145bad318d074f036cf1aa1a49066b8",
"text": "Based on imbalanced data, the predictive models for 5year survivability of breast cancer using decision tree are proposed. After data preprocessing from SEER breast cancer datasets, it is obviously that the category of data distribution is imbalanced. Under-sampling is taken to make up the disadvantage of the performance of models caused by the imbalanced data. The performance of the models is evaluated by AUC under ROC curve, accuracy, specificity and sensitivity with 10-fold stratified cross-validation. The performance of models is best while the distribution of data is approximately equal. Bagging algorithm is used to build an integration decision tree model for predicting breast cancer survivability. Keywords-imbalanced data;decision tree;predictive breast cancer survivability;10-fold stratified cross-validation;bagging algorithm",
"title": ""
},
{
"docid": "406e06e00799733c517aff88c9c85e0b",
"text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.",
"title": ""
},
{
"docid": "1c78424b85b5ffd29e04e34639548bc8",
"text": "Datasets in the LOD cloud are far from being static in their nature and how they are exposed. As resources are added and new links are set, applications consuming the data should be able to deal with these changes. In this paper we investigate how LOD datasets change and what sensible measures there are to accommodate dataset dynamics. We compare our findings with traditional, document-centric studies concerning the “freshness” of the document collections and propose metrics for LOD datasets.",
"title": ""
},
{
"docid": "002acd845aa9776840dfe9e8755d7732",
"text": "A detailed study on the mechanism of band-to-band tunneling in carbon nanotube field-effect transistors (CNFETs) is presented. Through a dual-gated CNFET structure tunneling currents from the valence into the conduction band and vice versa can be enabled or disabled by changing the gate potential. Different from a conventional device where the Fermi distribution ultimately limits the gate voltage range for switching the device on or off, current flow is controlled here by the valence and conduction band edges in a bandpass-filter-like arrangement. We discuss how the structure of the nanotube is the key enabler of this particular one-dimensional tunneling effect.",
"title": ""
}
] | scidocsrr |
61b9619b02f8c7f3c0d2b06f4e6b6413 | Linux kernel vulnerabilities: state-of-the-art defenses and open problems | [
{
"docid": "3724a800d0c802203835ef9f68a87836",
"text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and messagesignaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.",
"title": ""
},
{
"docid": "68bab5e0579a0cdbaf232850e0587e11",
"text": "This article presents a new mechanism that enables applications to run correctly when device drivers fail. Because device drivers are the principal failing component in most systems, reducing driver-induced failures greatly improves overall reliability. Earlier work has shown that an operating system can survive driver failures [Swift et al. 2005], but the applications that depend on them cannot. Thus, while operating system reliability was greatly improved, application reliability generally was not.To remedy this situation, we introduce a new operating system mechanism called a shadow driver. A shadow driver monitors device drivers and transparently recovers from driver failures. Moreover, it assumes the role of the failed driver during recovery. In this way, applications using the failed driver, as well as the kernel itself, continue to function as expected.We implemented shadow drivers for the Linux operating system and tested them on over a dozen device drivers. Our results show that applications and the OS can indeed survive the failure of a variety of device drivers. Moreover, shadow drivers impose minimal performance overhead. Lastly, they can be introduced with only modest changes to the OS kernel and with no changes at all to existing device drivers.",
"title": ""
}
] | [
{
"docid": "68f10e252faf7171cac8d5ba914fcba9",
"text": "Most languages have no formal writing system and at best a limited written record. However, textual data is critical to natural language processing and particularly important for the training of language models that would facilitate speech recognition of such languages. Bilingual phonetic dictionaries are often available in some form, since lexicon creation is a fundamental task of documentary linguistics. We investigate the use of such dictionaries to improve language models when textual training data is limited to as few as 1k sentences. The method involves learning cross-lingual word embeddings as a pretraining step in the training of monolingual language models. Results across a number of languages show that language models are improved by such pre-training.",
"title": ""
},
{
"docid": "45b17b6521e84c8536ad852969b21c1d",
"text": "Previous research on online media popularity prediction concluded that the rise in popularity of online videos maintains a conventional logarithmic distribution. However, recent studies have shown that a significant portion of online videos exhibit bursty/sudden rise in popularity, which cannot be accounted for by video domain features alone. In this paper, we propose a novel transfer learning framework that utilizes knowledge from social streams (e.g., Twitter) to grasp sudden popularity bursts in online content. We develop a transfer learning algorithm that can learn topics from social streams allowing us to model the social prominence of video content and improve popularity predictions in the video domain. Our transfer learning framework has the ability to scale with incoming stream of tweets, harnessing physical world event information in real-time. Using data comprising of 10.2 million tweets and 3.5 million YouTube videos, we show that social prominence of the video topic (context) is responsible for the sudden rise in its popularity where social trends have a ripple effect as they spread from the Twitter domain to the video domain. We envision that our cross-domain popularity prediction model will be substantially useful for various media applications that could not be previously solved by traditional multimedia techniques alone.",
"title": ""
},
{
"docid": "28b7905d804cef8e54dbdf4f63f6495d",
"text": "The recently introduced Galois/Counter Mode (GCM) of operation for block ciphers provides both encryption and message authentication, using universal hashing based on multiplication in a binary finite field. We analyze its security and performance, and show that it is the most efficient mode of operation for high speed packet networks, by using a realistic model of a network crypto module and empirical data from studies of Internet traffic in conjunction with software experiments and hardware designs. GCM has several useful features: it can accept IVs of arbitrary length, can act as a stand-alone message authentication code (MAC), and can be used as an incremental MAC. We show that GCM is secure in the standard model of concrete security, even when these features are used. We also consider several of its important system-security aspects.",
"title": ""
},
{
"docid": "a83b417c2be604427eacf33b1db91468",
"text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.",
"title": ""
},
{
"docid": "71759cdcf18dabecf1d002727eb9d8b8",
"text": "A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.",
"title": ""
},
{
"docid": "0cd5813a069c8955871784cd3e63aa83",
"text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.",
"title": ""
},
{
"docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9",
"text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.",
"title": ""
},
{
"docid": "03f98b18392bd178ea68ce19b13589fa",
"text": "Neural network techniques are widely used in network embedding, boosting the result of node classification, link prediction, visualization and other tasks in both aspects of efficiency and quality. All the state of art algorithms put effort on the neighborhood information and try to make full use of it. However, it is hard to recognize core periphery structures simply based on neighborhood. In this paper, we first discuss the influence brought by random-walk based sampling strategies to the embedding results. Theoretical and experimental evidences show that random-walk based sampling strategies fail to fully capture structural equivalence. We present a new method, SNS, that performs network embeddings using structural information (namely graphlets) to enhance its quality. SNS effectively utilizes both neighbor information and local-subgraphs similarity to learn node embeddings. This is the first framework that combines these two aspects as far as we know, positively merging two important areas in graph mining and machine learning. Moreover, we investigate what kinds of local-subgraph features matter the most on the node classification task, which enables us to further improve the embedding quality. Experiments show that our algorithm outperforms other unsupervised and semi-supervised neural network embedding algorithms on several real-world datasets.",
"title": ""
},
{
"docid": "4e46fb5c1abb3379519b04a84183b055",
"text": "Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.",
"title": ""
},
{
"docid": "2f17160c9f01aa779b1745a57e34e1aa",
"text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.",
"title": ""
},
{
"docid": "f5df06ebd22d4eac95287b38a5c3cc6b",
"text": "We discuss the use of a double exponentially tapered slot antenna (DETSA) fabricated on flexible liquid crystal polymer (LCP) as a candidate for ultrawideband (UWB) communications systems. The features of the antenna and the effect of the antenna on a transmitted pulse are investigated. Return loss and E and H plane radiation pattern measurements are presented in several frequencies covering the whole ultra wide band. The return loss remains below -10 dB and the shape of the radiation pattern remains fairly constant in the whole UWB range (3.1 to 10.6 GHz). The main lobe characteristic of the radiation pattern remains stable even when the antenna is significantly conformed. The major effect of the conformation is an increase in the cross polarization component amplitude. The system: transmitter DETSA-channel receiver DETSA is measured in frequency domain and shows that the antenna adds very little distortion on a transmitted pulse. The distortion remains small even when both transmitter and receiver antennas are folded, although it increases slightly.",
"title": ""
},
{
"docid": "27bcbde431c340db7544b58faa597fb7",
"text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.",
"title": ""
},
{
"docid": "a583bbf2deac0bf99e2790c47598cddd",
"text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.",
"title": ""
},
{
"docid": "6e63767a96f0d57ecfe98f55c89ae778",
"text": "We investigate the use of Deep Q-Learning to control a simulated car via reinforcement learning. We start by implementing the approach of [5] ourselves, and then experimenting with various possible alterations to improve performance on our selected task. In particular, we experiment with various reward functions to induce specific driving behavior, double Q-learning, gradient update rules, and other hyperparameters. We find we are successfully able to train an agent to control the simulated car in JavaScript Racer [3] in some respects. Our agent successfully learned the turning operation, progressively gaining the ability to navigate larger sections of the simulated raceway without crashing. In obstacle avoidance, however, our agent faced challenges which we suspect are due to insufficient training time.",
"title": ""
},
{
"docid": "c71d27d4e4e9c85e3f5016fa36d20a16",
"text": "We present, GEM, the first heterogeneous graph neural network approach for detecting malicious accounts at Alipay, one of the world's leading mobile cashless payment platform. Our approach, inspired from a connected subgraph approach, adaptively learns discriminative embeddings from heterogeneous account-device graphs based on two fundamental weaknesses of attackers, i.e. device aggregation and activity aggregation. For the heterogeneous graph consists of various types of nodes, we propose an attention mechanism to learn the importance of different types of nodes, while using the sum operator for modeling the aggregation patterns of nodes in each type. Experiments show that our approaches consistently perform promising results compared with competitive methods over time.",
"title": ""
},
{
"docid": "fa99f24d38858b5951c7af587194f4e3",
"text": "Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.",
"title": ""
},
{
"docid": "951d3f81129ecafa2d271d4398d9b3e6",
"text": "The content-based image retrieval methods are developed to help people find what they desire based on preferred images instead of linguistic information. This paper focuses on capturing the image features representing details of the collar designs, which is important for people to choose clothing. The quality of the feature extraction methods is important for the queries. This paper presents several new methods for the collar-design feature extraction. A prototype of clothing image retrieval system based on relevance feedback approach and optimum-path forest algorithm is also developed to improve the query results and allows users to find clothing image of more preferred design. A series of experiments are conducted to test the qualities of the feature extraction methods and validate the effectiveness and efficiency of the RF-OPF prototype from multiple aspects. The evaluation scores of initial query results are used to test the qualities of the feature extraction methods. The average scores of all RF steps, the average numbers of RF iterations taken before achieving desired results and the score transition of RF iterations are used to validate the effectiveness and efficiency of the proposed RF-OPF prototype.",
"title": ""
},
{
"docid": "37b60f30aba47a0c2bb3d31c848ee4bc",
"text": "This research analyzed the perception of Makassar’s teenagers toward Korean drama and music and their influences to them. Interviews and digital recorder were provided as instruments of the research to ten respondents who are members of Makassar Korean Lover Community. Then, in analyzing data the researchers used descriptive qualitative method that aimed to get deep information about Korean wave in Makassar. The Results of the study found that Makassar’s teenagers put enormous interest in Korean culture especially Korean drama and music. However, most respondents also realize that the presence of Korean culture has a great negative impact to them and their environments. Korean culture itself gives effect in several aspects such as the influence on behavior, Influence on the taste and Influence on the environment as well.",
"title": ""
},
{
"docid": "8b548e2c1922e6e105ab40b60fd7433c",
"text": "Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but their heavy computation demand has considerably limited their practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate high computational demand of an artificial neural network (ANN) which is restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating $1024\\times 1024$ network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).",
"title": ""
},
{
"docid": "56e406924a967700fba3fe554b9a8484",
"text": "Wearable orthoses can function both as assistive devices, which allow the user to live independently, and as rehabilitation devices, which allow the user to regain use of an impaired limb. To be fully wearable, such devices must have intuitive controls, and to improve quality of life, the device should enable the user to perform Activities of Daily Living. In this context, we explore the feasibility of using electromyography (EMG) signals to control a wearable exotendon device to enable pick and place tasks. We use an easy to don, commodity forearm EMG band with 8 sensors to create an EMG pattern classification control for an exotendon device. With this control, we are able to detect a user's intent to open, and can thus enable extension and pick and place tasks. In experiments with stroke survivors, we explore the accuracy of this control in both non-functional and functional tasks. Our results support the feasibility of developing wearable devices with intuitive controls which provide a functional context for rehabilitation.",
"title": ""
}
] | scidocsrr |
085d8ef9f29229887533b78ad8a9273a | Pain catastrophizing and kinesiophobia: predictors of chronic low back pain. | [
{
"docid": "155411fe242dd4f3ab39649d20f5340f",
"text": "Two studies are presented that investigated 'fear of movement/(re)injury' in chronic musculoskeletal pain and its relation to behavioral performance. The 1st study examines the relation among fear of movement/(re)injury (as measured with the Dutch version of the Tampa Scale for Kinesiophobia (TSK-DV)) (Kori et al. 1990), biographical variables (age, pain duration, gender, use of supportive equipment, compensation status), pain-related variables (pain intensity, pain cognitions, pain coping) and affective distress (fear and depression) in a group of 103 chronic low back pain (CLBP) patients. In the 2nd study, motoric, psychophysiologic and self-report measures of fear are taken from 33 CLBP patients who are exposed to a single and relatively simple movement. Generally, findings demonstrated that the fear of movement/(re)injury is related to gender and compensation status, and more closely to measures of catastrophizing and depression, but in a much lesser degree to pain coping and pain intensity. Furthermore, subjects who report a high degree of fear of movement/(re)injury show more fear and escape/avoidance when exposed to a simple movement. The discussion focuses on the clinical relevance of the construct of fear of movement/(re)injury and research questions that remain to be answered.",
"title": ""
}
] | [
{
"docid": "f031d0db43b5f9d9d3068916ea975d75",
"text": "Difficulties in the social domain and motor anomalies have been widely investigated in Autism Spectrum Disorder (ASD). However, they have been generally considered as independent, and therefore tackled separately. Recent advances in neuroscience have hypothesized that the cortical motor system can play a role not only as a controller of elementary physical features of movement, but also in a complex domain as social cognition. Here, going beyond previous studies on ASD that described difficulties in the motor and in the social domain separately, we focus on the impact of motor mechanisms anomalies on social functioning. We consider behavioral, electrophysiological and neuroimaging findings supporting the idea that motor cognition is a critical \"intermediate phenotype\" for ASD. Motor cognition anomalies in ASD affect the processes of extraction, codification and subsequent translation of \"external\" social information into the motor system. Intriguingly, this alternative \"motor\" approach to the social domain difficulties in ASD may be promising to bridge the gap between recent experimental findings and clinical practice, potentially leading to refined preventive approaches and successful treatments.",
"title": ""
},
{
"docid": "70991373ae71f233b0facd2b5dd1a0d3",
"text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.",
"title": ""
},
{
"docid": "c630b600a0b03e9e3ede1c0132f80264",
"text": "68 AI MAGAZINE Adaptive graphical user interfaces (GUIs) automatically tailor the presentation of functionality to better fit an individual user’s tasks, usage patterns, and abilities. A familiar example of an adaptive interface is the Windows XP start menu, where a small set of applications from the “All Programs” submenu is replicated in the top level of the “Start” menu for easier access, saving users from navigating through multiple levels of the menu hierarchy (figure 1). The potential of adaptive interfaces to reduce visual search time, cognitive load, and motor movement is appealing, and when the adaptation is successful an adaptive interface can be faster and preferred in comparison to a nonadaptive counterpart (for example, Gajos et al. [2006], Greenberg and Witten [1985]). In practice, however, many challenges exist, and, thus far, evaluation results of adaptive interfaces have been mixed. For an adaptive interface to be successful, the benefits of correct adaptations must outweigh the costs, or usability side effects, of incorrect adaptations. Often, an adaptive mechanism designed to improve one aspect of the interaction, typically motor movement or visual search, inadvertently increases effort along another dimension, such as cognitive or perceptual load. The result is that many adaptive designs that were expected to confer a benefit along one of these dimensions have failed in practice. For example, a menu that tracks how frequently each item is used and adaptively reorders itself so that items appear in order from most to least frequently accessed should improve motor performance, but in reality this design can slow users down and reduce satisfaction because of the constantly changing layout (Mitchell and Schneiderman [1989]; for example, figure 2b). Commonly cited issues with adaptive interfaces include the lack of control the user has over the adaptive process and the difficulty that users may have in predicting what the system’s response will be to a user action (Höök 2000). User evaluation of adaptive GUIs is more complex than eval-",
"title": ""
},
{
"docid": "4facc72eb8270d12d0182c7a7833736f",
"text": "We construct a family of extremely simple bijections that yield Cayley’s famous formula for counting trees. The weight preserving properties of these bijections furnish a number of multivariate generating functions for weighted Cayley trees. Essentially the same idea is used to derive bijective proofs and q-analogues for the number of spanning trees of other graphs, including the complete bipartite and complete tripartite graphs. These bijections also allow the calculation of explicit formulas for the expected number of various statistics on Cayley trees.",
"title": ""
},
{
"docid": "47949e080b4f5643dde02eb1c5c2527f",
"text": "Extracting biomedical entities and their relations from text has important applications on biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Many efforts need to be made on feature engineering when feature-based models are employed. Moreover, pipeline models may suffer error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, and it can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performances with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate the research on biomedical text mining.",
"title": ""
},
{
"docid": "1c126457ee6b61be69448ee00a64d557",
"text": "Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes are abundant, making them an overrepresented majority, and data of other classes are scarce, making them an underrepresented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this paper, we propose a cost-sensitive (CoSen) deep neural network, which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class-dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multiclass problems without any modification. Moreover, as opposed to data-level approaches, we do not alter the original data distribution, which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification data sets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and CoSen classifiers demonstrate the superior performance of our proposed method.",
"title": ""
},
{
"docid": "3a852aa880c564a85cc8741ce7427ced",
"text": "INTRODUCTION\nTumeric is a spice that comes from the root Curcuma longa, a member of the ginger family, Zingaberaceae. In Ayurveda (Indian traditional medicine), tumeric has been used for its medicinal properties for various indications and through different routes of administration, including topically, orally, and by inhalation. Curcuminoids are components of tumeric, which include mainly curcumin (diferuloyl methane), demethoxycurcumin, and bisdemethoxycurcmin.\n\n\nOBJECTIVES\nThe goal of this systematic review of the literature was to summarize the literature on the safety and anti-inflammatory activity of curcumin.\n\n\nMETHODS\nA search of the computerized database MEDLINE (1966 to January 2002), a manual search of bibliographies of papers identified through MEDLINE, and an Internet search using multiple search engines for references on this topic was conducted. The PDR for Herbal Medicines, and four textbooks on herbal medicine and their bibliographies were also searched.\n\n\nRESULTS\nA large number of studies on curcumin were identified. These included studies on the antioxidant, anti-inflammatory, antiviral, and antifungal properties of curcuminoids. Studies on the toxicity and anti-inflammatory properties of curcumin have included in vitro, animal, and human studies. A phase 1 human trial with 25 subjects using up to 8000 mg of curcumin per day for 3 months found no toxicity from curcumin. Five other human trials using 1125-2500 mg of curcumin per day have also found it to be safe. These human studies have found some evidence of anti-inflammatory activity of curcumin. The laboratory studies have identified a number of different molecules involved in inflammation that are inhibited by curcumin including phospholipase, lipooxygenase, cyclooxygenase 2, leukotrienes, thromboxane, prostaglandins, nitric oxide, collagenase, elastase, hyaluronidase, monocyte chemoattractant protein-1 (MCP-1), interferon-inducible protein, tumor necrosis factor (TNF), and interleukin-12 (IL-12).\n\n\nCONCLUSIONS\nCurcumin has been demonstrated to be safe in six human trials and has demonstrated anti-inflammatory activity. It may exert its anti-inflammatory activity by inhibition of a number of different molecules that play a role in inflammation.",
"title": ""
},
{
"docid": "4272b4a73ecd9d2b60e0c60de0469f17",
"text": "Suggesting that empirical work in the field of reading has advanced sufficiently to allow substantial agreed-upon results and conclusions, this literature review cuts through the detail of partially convergent, sometimes discrepant research findings to provide an integrated picture of how reading develops and how reading instruction should proceed. The focus of the review is prevention. Sketched is a picture of the conditions under which reading is most likely to develop easily--conditions that include stimulating preschool environments, excellent reading instruction, and the absence of any of a wide array of risk factors. It also provides recommendations for practice as well as recommendations for further research. After a preface and executive summary, chapters are (1) Introduction; (2) The Process of Learning to Read; (3) Who Has Reading Difficulties; (4) Predic:ors of Success and Failure in Reading; (5) Preventing Reading Difficulties before Kindergarten; (6) Instructional Strategies for Kindergarten and the Primary Grades; (7) Organizational Strategies for Kindergarten and the Primary Grades; (8) Helping Children with Reading Difficulties in Grades 1 to 3; (9) The Agents of Change; and (10) Recommendations for Practice and Research. Contains biographical sketches of the committee members and an index. Contains approximately 800 references.",
"title": ""
},
{
"docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "d994b23ea551f23215232c0771e7d6b3",
"text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).",
"title": ""
},
{
"docid": "9961f44d4ab7d0a344811186c9234f2c",
"text": "This paper discusses the trust related issues and arguments (evidence) Internet stores need to provide in order to increase consumer trust. Based on a model of trust from academic literature, in addition to a model of the customer service life cycle, the paper develops a framework that identifies key trust-related issues and organizes them into four categories: personal information, product quality and price, customer service, and store presence. It is further validated by comparing the issues it raises to issues identified in a review of academic studies, and to issues of concern identified in two consumer surveys. The framework is also applied to ten well-known web sites to demonstrate its applicability. The proposed framework will benefit both practitioners and researchers by identifying important issues regarding trust, which need to be accounted for in Internet stores. For practitioners, it provides a guide to the issues Internet stores need to address in their use of arguments. For researchers, it can be used as a foundation for future empirical studies investigating the effects of trust-related arguments on consumers’ trust in Internet stores.",
"title": ""
},
{
"docid": "9373cde066d8d898674a519206f1c38f",
"text": "This work proposes a novel deep network architecture to solve the camera ego-motion estimation problem. A motion estimation network generally learns features similar to optical flow (OF) fields starting from sequences of images. This OF can be described by a lower dimensional latent space. Previous research has shown how to find linear approximations of this space. We propose to use an autoencoder network to find a nonlinear representation of the OF manifold. In addition, we propose to learn the latent space jointly with the estimation task, so that the learned OF features become a more robust description of the OF input. We call this novel architecture latent space visual odometry (LS-VO). The experiments show that LS-VO achieves a considerable increase in performances with respect to baselines, while the number of parameters of the estimation network only slightly increases.",
"title": ""
},
{
"docid": "f6ad0d01cb66c1260c1074c4f35808c6",
"text": "BACKGROUND\nUnilateral spatial neglect causes difficulty attending to one side of space. Various rehabilitation interventions have been used but evidence of their benefit is lacking.\n\n\nOBJECTIVES\nTo assess whether cognitive rehabilitation improves functional independence, neglect (as measured using standardised assessments), destination on discharge, falls, balance, depression/anxiety and quality of life in stroke patients with neglect measured immediately post-intervention and at longer-term follow-up; and to determine which types of interventions are effective and whether cognitive rehabilitation is more effective than standard care or an attention control.\n\n\nSEARCH METHODS\nWe searched the Cochrane Stroke Group Trials Register (last searched June 2012), MEDLINE (1966 to June 2011), EMBASE (1980 to June 2011), CINAHL (1983 to June 2011), PsycINFO (1974 to June 2011), UK National Research Register (June 2011). We handsearched relevant journals (up to 1998), screened reference lists, and tracked citations using SCISEARCH.\n\n\nSELECTION CRITERIA\nWe included randomised controlled trials (RCTs) of cognitive rehabilitation specifically aimed at spatial neglect. We excluded studies of general stroke rehabilitation and studies with mixed participant groups, unless more than 75% of their sample were stroke patients or separate stroke data were available.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected studies, extracted data, and assessed study quality. For subgroup analyses, review authors independently categorised the approach underlying the cognitive intervention as either 'top-down' (interventions that encourage awareness of the disability and potential compensatory strategies) or 'bottom-up' (interventions directed at the impairment but not requiring awareness or behavioural change, e.g. wearing prisms or patches).\n\n\nMAIN RESULTS\nWe included 23 RCTs with 628 participants (adding 11 new RCTs involving 322 new participants for this update). Only 11 studies were assessed to have adequate allocation concealment, and only four studies to have a low risk of bias in all categories assessed. Most studies measured outcomes using standardised neglect assessments: 15 studies measured effect on activities of daily living (ADL) immediately after the end of the intervention period, but only six reported persisting effects on ADL. One study (30 participants) reported discharge destination and one study (eight participants) reported the number of falls.Eighteen of the 23 included RCTs compared cognitive rehabilitation with any control intervention (placebo, attention or no treatment). Meta-analyses demonstrated no statistically significant effect of cognitive rehabilitation, compared with control, for persisting effects on either ADL (five studies, 143 participants) or standardised neglect assessments (eight studies, 172 participants), or for immediate effects on ADL (10 studies, 343 participants). In contrast, we found a statistically significant effect in favour of cognitive rehabilitation compared with control, for immediate effects on standardised neglect assessments (16 studies, 437 participants, standardised mean difference (SMD) 0.35, 95% confidence interval (CI) 0.09 to 0.62). However, sensitivity analyses including only studies of high methodological quality removed evidence of a significant effect of cognitive rehabilitation.Additionally, five of the 23 included RCTs compared one cognitive rehabilitation intervention with another. 
These included three studies comparing a visual scanning intervention with another cognitive rehabilitation intervention, and two studies (three comparison groups) comparing a visual scanning intervention plus another cognitive rehabilitation intervention with a visual scanning intervention alone. Only two small studies reported a measure of functional disability and there was considerable heterogeneity within these subgroups (I² > 40%) when we pooled standardised neglect assessment data, limiting the ability to draw generalised conclusions.Subgroup analyses exploring the effect of having an attention control demonstrated some evidence of a statistically significant difference between those comparing rehabilitation with attention control and those with another control or no treatment group, for immediate effects on standardised neglect assessments (test for subgroup differences, P = 0.04).\n\n\nAUTHORS' CONCLUSIONS\nThe effectiveness of cognitive rehabilitation interventions for reducing the disabling effects of neglect and increasing independence remains unproven. As a consequence, no rehabilitation approach can be supported or refuted based on current evidence from RCTs. However, there is some very limited evidence that cognitive rehabilitation may have an immediate beneficial effect on tests of neglect. This emerging evidence justifies further clinical trials of cognitive rehabilitation for neglect. However, future studies need to have appropriate high quality methodological design and reporting, to examine persisting effects of treatment and to include an attention control comparator.",
"title": ""
},
{
"docid": "b883116f741733b3bbd3933fdc1b4542",
"text": "To address concerns of TREC-style relevance judgments, we explore two improvements. The first one seeks to make relevance judgments contextual, collecting in situ feedback of users in an interactive search session and embracing usefulness as the primary judgment criterion. The second one collects multidimensional assessments to complement relevance or usefulness judgments, with four distinct alternative aspects examined in this paper - novelty, understandability, reliability, and effort.\n We evaluate different types of judgments by correlating them with six user experience measures collected from a lab user study. Results show that switching from TREC-style relevance criteria to usefulness is fruitful, but in situ judgments do not exhibit clear benefits over the judgments collected without context. In contrast, combining relevance or usefulness with the four alternative judgments consistently improves the correlation with user experience measures, suggesting future IR systems should adopt multi-aspect search result judgments in development and evaluation.\n We further examine implicit feedback techniques for predicting these judgments. We find that click dwell time, a popular indicator of search result quality, is able to predict some but not all dimensions of the judgments. We enrich the current implicit feedback methods using post-click user interaction in a search session and achieve better prediction for all six dimensions of judgments.",
"title": ""
},
{
"docid": "6702bfca88f86e0c35a8b6195d0c971c",
"text": "A hierarchical scheme for clustering data is presented which applies to spaces with a high number of dimensions ( 3 D N > ). The data set is first reduced to a smaller set of partitions (multi-dimensional bins). Multiple clustering techniques are used, including spectral clustering; however, new techniques are also introduced based on the path length between partitions that are connected to one another. A Line-of-Sight algorithm is also developed for clustering. A test bank of 12 data sets with varying properties is used to expose the strengths and weaknesses of each technique. Finally, a robust clustering technique is discussed based on reaching a consensus among the multiple approaches, overcoming the weaknesses found individually.",
"title": ""
},
{
"docid": "edfc9cb39fe45a43aed78379bafa2dfc",
"text": "We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.",
"title": ""
},
{
"docid": "b559579485358f7958eea8907c8b4b09",
"text": "Word embedding models learn a distributed vectorial representation for words, which can be used as the basis for (deep) learning models to solve a variety of natural language processing tasks. One of the main disadvantages of current word embedding models is that they learn a single representation for each word in a metric space, as a result of which they cannot appropriately model polysemous words. In this work, we develop a new word embedding model that can accurately represent such words by automatically learning multiple representations for each word, whilst remaining computationally efficient. Without any supervision, our model learns multiple, complementary embeddings that all capture different semantic structure. We demonstrate the potential merits of our model by training it on large text corpora, and evaluating it on word similarity tasks. Our proposed embedding model is competitive with the state of the art and can easily scale to large corpora due to its computational simplicity.",
"title": ""
},
{
"docid": "8700e170ba9c3e6c35008e2ccff48ef9",
"text": "Recently, Uber has emerged as a leader in the \"sharing economy\". Uber is a \"ride sharing\" service that matches willing drivers with customers looking for rides. However, unlike other open marketplaces (e.g., AirBnB), Uber is a black-box: they do not provide data about supply or demand, and prices are set dynamically by an opaque \"surge pricing\" algorithm. The lack of transparency has led to concerns about whether Uber artificially manipulate prices, and whether dynamic prices are fair to customers and drivers. In order to understand the impact of surge pricing on passengers and drivers, we present the first in-depth investigation of Uber. We gathered four weeks of data from Uber by emulating 43 copies of the Uber smartphone app and distributing them throughout downtown San Francisco (SF) and midtown Manhattan. Using our dataset, we are able to characterize the dynamics of Uber in SF and Manhattan, as well as identify key implementation details of Uber's surge price algorithm. Our observations about Uber's surge price algorithm raise important questions about the fairness and transparency of this system.",
"title": ""
},
{
"docid": "1272563e64ca327aba1be96f2e045c30",
"text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.",
"title": ""
},
{
"docid": "750e7bd1b23da324a0a51d0b589acbfb",
"text": "Various powerful people detection methods exist. Surprisingly, most approaches rely on static image features only despite the obvious potential of motion information for people detection. This paper systematically evaluates different features and classifiers in a sliding-window framework. First, our experiments indicate that incorporating motion information improves detection performance significantly. Second, the combination of multiple and complementary feature types can also help improve performance. And third, the choice of the classifier-feature combination and several implementation details are crucial to reach best performance. In contrast to many recent papers experimental results are reported for four different datasets rather than using a single one. Three of them are taken from the literature allowing for direct comparison. The fourth dataset is newly recorded using an onboard camera driving through urban environment. Consequently this dataset is more realistic and more challenging than any currently available dataset.",
"title": ""
}
] | scidocsrr |
b0bf55e123a1d0efe1fd44d5b3c1e4e9 | Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud | [
{
"docid": "70cc8c058105b905eebdf941ca2d3f2e",
"text": "Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.",
"title": ""
}
] | [
{
"docid": "8f78f2efdd2fecaf32fbc7f5ffa79218",
"text": "Evolutionary population dynamics (EPD) deal with the removal of poor individuals in nature. It has been proven that this operator is able to improve the median fitness of the whole population, a very effective and cheap method for improving the performance of meta-heuristics. This paper proposes the use of EPD in the grey wolf optimizer (GWO). In fact, EPD removes the poor search agents of GWO and repositions them around alpha, beta, or delta wolves to enhance exploitation. The GWO is also required to randomly reinitialize its worst search agents around the search space by EPD to promote exploration. The proposed GWO–EPD algorithm is benchmarked on six unimodal and seven multi-modal test functions. The results are compared to the original GWO algorithm for verification. It is demonstrated that the proposed operator is able to significantly improve the performance of the GWO algorithm in terms of exploration, local optima avoidance, exploitation, local search, and convergence rate.",
"title": ""
},
{
"docid": "8905bd760b0c72fbfe4fbabd778ff408",
"text": "Boredom and low levels of task engagement while driving can pose road safety risks, e.g., inattention during low traffic, routine trips, or semi-automated driving. Digital technology interventions that increase task engagement, e.g., through performance feedback, increased challenge, and incentives (often referred to as ‘gamification’), could therefore offer safety benefits. To explore the impact of such interventions, we conducted experiments in a highfidelity driving simulator with thirty-two participants. In two counterbalanced conditions (control and intervention), we compared driving behaviour, physiological arousal, and subjective experience. Results indicate that the gamified boredom intervention reduced unsafe coping mechanisms such as speeding while promoting anticipatory driving. We can further infer that the intervention not only increased one’s attention and arousal during the intermittent gamification challenges, but that these intermittent stimuli may also help sustain one’s attention and arousal in between challenges and throughout a drive. At the same time, the gamified condition led to slower hazard reactions and short off-road glances. Our contributions deepen our understanding of driver boredom and pave the way for engaging interventions for safety critical tasks.",
"title": ""
},
{
"docid": "d5d96493b34cfbdf135776e930ec5979",
"text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.",
"title": ""
},
{
"docid": "90c3543eca7a689188725e610e106ce9",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. 
For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
},
{
"docid": "e49f9ad79d3d4d31003c0cda7d7d49c5",
"text": "Greater trochanter pain syndrome due to tendinopathy or bursitis is a common cause of hip pain. The previously reported magnetic resonance (MR) findings of trochanteric tendinopathy and bursitis are peritrochanteric fluid and abductor tendon abnormality. We have often noted peritrochanteric high T2 signal in patients without trochanteric symptoms. The purpose of this study was to determine whether the MR findings of peritrochanteric fluid or hip abductor tendon pathology correlate with trochanteric pain. We retrospectively reviewed 131 consecutive MR examinations of the pelvis (256 hips) for T2 peritrochanteric signal and abductor tendon abnormalities without knowledge of the clinical symptoms. Any T2 peritrochanteric abnormality was characterized by size as tiny, small, medium, or large; by morphology as feathery, crescentic, or round; and by location as bursal or intratendinous. The clinical symptoms of hip pain and trochanteric pain were compared to the MR findings on coronal, sagittal, and axial T2 sequences using chi-square or Fisher’s exact test with significance assigned as p < 0.05. Clinical symptoms of trochanteric pain syndrome were present in only 16 of the 256 hips. All 16 hips with trochanteric pain and 212 (88%) of 240 without trochanteric pain had peritrochanteric abnormalities (p = 0.15). Eighty-eight percent of hips with trochanteric symptoms had gluteus tendinopathy while 50% of those without symptoms had such findings (p = 0.004). Other than tendinopathy, there was no statistically significant difference between hips with or without trochanteric symptoms and the presence of peritrochanteric T2 abnormality, its size or shape, and the presence of gluteus medius or minimus partial thickness tears. Patients with trochanteric pain syndrome always have peritrochanteric T2 abnormalities and are significantly more likely to have abductor tendinopathy on magnetic resonance imaging (MRI). However, although the absence of peritrochanteric T2 MR abnormalities makes trochanteric pain syndrome unlikely, detection of these abnormalities on MRI is a poor predictor of trochanteric pain syndrome as these findings are present in a high percentage of patients without trochanteric pain.",
"title": ""
},
{
"docid": "8aa305f217314d60ed6c9f66d20a7abf",
"text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.",
"title": ""
},
{
"docid": "9164dab8c4c55882f8caecc587c32eb1",
"text": "We suggest an approach to exploratory analysis of diverse types of spatiotemporal data with the use of clustering and interactive visual displays. We can apply the same generic clustering algorithm to different types of data owing to the separation of the process of grouping objects from the process of computing distances between the objects. In particular, we apply the densitybased clustering algorithm OPTICS to events (i.e. objects having spatial and temporal positions), trajectories of moving entities, and spatial distributions of events or moving entities in different time intervals. Distances are computed in a specific way for each type of objects; moreover, it may be useful to have several different distance functions for the same type of objects. Thus, multiple distance functions available for trajectories support different analysis tasks. We demonstrate the use of our approach by example of two datasets from the VAST Challenge 2008: evacuation traces (trajectories of moving entities) and landings and interdictions of migrant boats (events).",
"title": ""
},
{
"docid": "052a83669b39822eda51f2e7222074b4",
"text": "A class-E synchronous rectifier has been designed and implemented using 0.13-μm CMOS technology. A design methodology based on the theory of time-reversal duality has been used where a class-E amplifier circuit is transformed into a class-E rectifier circuit. The methodology is distinctly different from other CMOS RF rectifier designs which use voltage multiplier techniques. Power losses in the rectifier are analyzed including saturation resistance in the switch, inductor losses, and current/voltage overlap losses. The rectifier circuit includes a 50-Ω single-ended RF input port with on-chip matching. The circuit is self-biased and completely powered from the RF input signal. Experimental results for the rectifier show a peak RF-to-dc conversion efficiency of 30% measured at a frequency of 2.4 GHz.",
"title": ""
},
{
"docid": "0bcff493580d763dbc1dd85421546201",
"text": "The development of powerful imaging tools, editing images for changing their data content is becoming a mark to undertake. Tempering image contents by adding, removing, or copying/moving without leaving a trace or unable to be discovered by the investigation is an issue in the computer forensic world. The protection of information shared on the Internet like images and any other con?dential information is very signi?cant. Nowadays, forensic image investigation tools and techniques objective is to reveal the tempering strategies and restore the firm belief in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyze these challenges. The experimental investigation focuses on using steganography applications that use same algorithms to hide information exclusively within an image. The research finding denotes that, if a certain steganography tool A is used to hide some information within a picture, and then tool B which uses the same procedure would not be able to recover the embedded image.",
"title": ""
},
{
"docid": "a0d34b1c003b7e88c2871deaaba761ed",
"text": "Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (as shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.1",
"title": ""
},
{
"docid": "7e78dd27dd2d4da997ceef7e867b7cd2",
"text": "Extracting facial feature is a key step in facial expression recognition (FER). Inaccurate feature extraction very often results in erroneous categorizing of facial expressions. Especially in robotic application, environmental factors such as illumination variation may cause FER system to extract feature inaccurately. In this paper, we propose a robust facial feature point extraction method to recognize facial expression in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. Face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on Gabor wavelet transformation to extract the facial points. Gabor jets are more invariable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitivity similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained respectively.",
"title": ""
},
{
"docid": "be29160b73b9ab727eb760a108a7254a",
"text": "Two-dimensional (2-D) analytical permanent-magnet (PM) eddy-current loss calculations are presented for slotless PM synchronous machines (PMSMs) with surface-inset PMs considering the current penetration effect. In this paper, the term slotless implies that either the stator is originally slotted but the slotting effects are neglected or the stator is originally slotless. The analytical magnetic field distribution is computed in polar coordinates from the 2-D subdomain method (i.e., based on formal resolution of Maxwell's equation applied in subdomain). Based on the predicted magnetic field distribution, the eddy-currents induced in the PMs are analytically obtained and the PM eddy-current losses considering eddy-current reaction field are calculated. The analytical expressions can be used for slotless PMSMs with any number of phases and any form of current and overlapping winding distribution. The effects of stator slotting are neglected and the current density distribution is modeled by equivalent current sheets located on the slot opening. To evaluate the efficacy of the proposed technique, the 2-D PM eddy-current losses for two slotless PMSMs are analytically calculated and compared with those obtained by 2-D finite-element analysis (FEA). The effects of the rotor rotational speed and the initial rotor mechanical angular position are investigated. The analytical results are in good agreement with those obtained by the 2-D FEA.",
"title": ""
},
{
"docid": "136ed8dc00926ceec6d67b9ab35e8444",
"text": "This paper addresses the property requirements of repair materials for high durability performance for concrete structure repair. It is proposed that the high tensile strain capacity of High Performance Fiber Reinforced Cementitious Composites (HPFRCC) makes such materials particularly suitable for repair applications, provided that the fresh properties are also adaptable to those required in placement techniques in typical repair applications. A specific version of HPFRCC, known as Engineered Cementitious Composites (ECC), is described. It is demonstrated that the fresh and hardened properties of ECC meet many of the requirements for durable repair performance. Recent experience in the use of this material in a bridge deck patch repair is highlighted. The origin of this article is a summary of a keynote lecture with the same title given at the Conference on Fiber Composites, High-Performance Concretes and Smart Materials, Chennai, India, Jan., 2004. It is only slightly updated here.",
"title": ""
},
{
"docid": "d7eb92756c8c3fb0ab49d7b101d96343",
"text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.",
"title": ""
},
{
"docid": "0bf150f6cd566c31ec840a57d8d2fa55",
"text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.",
"title": ""
},
{
"docid": "ef4272cd4b0d4df9aa968cc9ff528c1e",
"text": "Estimating action quality, the process of assigning a \"score\" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small-typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR ii) LSTM and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of diving, vault, figure skating. SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.",
"title": ""
},
{
"docid": "d8befc5eb47ac995e245cf9177c16d3d",
"text": "Our hypothesis is that the video game industry, in the attempt to simulate a realistic experience, has inadvertently collected very accurate data which can be used to solve problems in the real world. In this paper we describe a novel approach to soccer match prediction that makes use of only virtual data collected from a video game(FIFA 2015). Our results were comparable and in some places better than results achieved by predictors that used real data. We also use the data provided for each player and the players present in the squad, to analyze the team strategy. Based on our analysis, we were able to suggest better strategies for weak teams",
"title": ""
},
{
"docid": "eba545eb04c950ecd9462558c9d3da85",
"text": "The ability to recognize facial expressions automatically enables novel applications in human-computer interaction and other areas. Consequently, there has been active research in this field, with several recent works utilizing Convolutional Neural Networks (CNNs) for feature extraction and inference. These works differ significantly in terms of CNN architectures and other factors. Based on the reported results alone, the performance impact of these factors is unclear. In this paper, we review the state of the art in image-based facial expression recognition using CNNs and highlight algorithmic differences and their performance impact. On this basis, we identify existing bottlenecks and consequently directions for advancing this research field. Furthermore, we demonstrate that overcoming one of these bottlenecks – the comparatively basic architectures of the CNNs utilized in this field – leads to a substantial performance increase. By forming an ensemble of modern deep CNNs, we obtain a FER2013 test accuracy of 75.2%, outperforming previous works without requiring auxiliary training data or face registration.",
"title": ""
},
{
"docid": "a31692667282fe92f2eefc63cd562c9e",
"text": "Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. The generality and interactivity of the system is demonstrated through a case scenario.",
"title": ""
}
] | scidocsrr |
0e6d0376110dc8b335378bf8b498dfca | Measuring the Effect of Conversational Aspects on Machine Translation Quality | [
{
"docid": "355d040cf7dd706f08ef4ce33d53a333",
"text": "Conversational participants tend to immediately and unconsciously adapt to each other’s language styles: a speaker will even adjust the number of articles and other function words in their next utterance in response to the number in their partner’s immediately preceding utterance. This striking level of coordination is thought to have arisen as a way to achieve social goals, such as gaining approval or emphasizing difference in status. But has the adaptation mechanism become so deeply embedded in the language-generation process as to become a reflex? We argue that fictional dialogs offer a way to study this question, since authors create the conversations but don’t receive the social benefits (rather, the imagined characters do). Indeed, we find significant coordination across many families of function words in our large movie-script corpus. We also report suggestive preliminary findings on the effects of gender and other features; e.g., surprisingly, for articles, on average, characters adapt more to females than to males.",
"title": ""
},
{
"docid": "e8f431676ed0a85cb09a6462303a3ec7",
"text": "This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potential noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese – English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It’s freely available to the public.",
"title": ""
}
] | [
{
"docid": "dcacbed90f45b76e9d40c427e16e89d6",
"text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.",
"title": ""
},
{
"docid": "991c5610152acf37b9a5e90b4f89bab8",
"text": "The BioTac® is a biomimetic tactile sensor for grip control and object characterization. It has three sensing modalities: thermal flux, microvibration and force. In this paper, we discuss feature extraction and interpretation of the force modality data. The data produced by this force sensing modality during sensor-object interaction are monotonic but non-linear. Algorithms and machine learning techniques were developed and validated for extracting the radius of curvature (ROC), point of application of force (PAF) and force vector (FV). These features have varying degrees of usefulness in extracting object properties using only cutaneous information; most robots can also provide the equivalent of proprioceptive sensing. For example, PAF and ROC is useful for extracting contact points for grasp and object shape as the finger depresses and moves along an object; magnitude of FV is useful in evaluating compliance from reaction forces when a finger is pushed into an object at a given velocity while direction is important for maintaining stable grip.",
"title": ""
},
{
"docid": "054b3f9068c92545e9c2c39e0728ad17",
"text": "Data Aggregation is an important topic and a suitable technique in reducing the energy consumption of sensors nodes in wireless sensor networks (WSN’s) for affording secure and efficient big data aggregation. The wireless sensor networks have been broadly applied, such as target tracking and environment remote monitoring. However, data can be easily compromised by a vast of attacks, such as data interception and tampering of data. Data integrity protection is proposed, gives an identity-based aggregate signature scheme for wireless sensor networks with a designated verifier. The aggregate signature scheme keeps data integrity, can reduce bandwidth and storage cost. Furthermore, the security of the scheme is effectively presented based on the computation of Diffie-Hellman random oracle model.",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "b842d759b124e1da0240f977d95a8b9a",
"text": "In this paper we argue for a broader view of ontology patterns and therefore present different use-cases where drawbacks of the current declarative pattern languages can be seen. We also discuss usecases where a declarative pattern approach can replace procedural-coded ontology patterns. With previous work on an ontology pattern language in mind we argue for a general pattern language.",
"title": ""
},
{
"docid": "556dbae297d06aaaeb0fd78016bd573f",
"text": "This paper presents a learning and scoring framework based on neural networks for speaker verification. The framework employs an autoencoder as its primary structure while three factors are jointly considered in the objective function for speaker discrimination. The first one, relating to the sample reconstruction error, makes the structure essentially a generative model, which benefits to learn most salient and useful properties of the data. Functioning in the middlemost hidden layer, the other two attempt to ensure that utterances spoken by the same speaker are mapped into similar identity codes in the speaker discriminative subspace, where the dispersion of all identity codes are maximized to some extent so as to avoid the effect of over-concentration. Finally, the decision score of each utterance pair is simply computed by cosine similarity of their identity codes. Dealing with utterances represented by i-vectors, the results of experiments conducted on the male portion of the core task in the NIST 2010 Speaker Recognition Evaluation (SRE) significantly demonstrate the merits of our approach over the conventional PLDA method.",
"title": ""
},
{
"docid": "734ca5ac095cc8339056fede2a642909",
"text": "The value of depth-first search or \"bacltracking\" as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and ar algorithm for finding the biconnected components of an undirect graph are presented. The space and time requirements of both algorithms are bounded by k1V + k2E dk for some constants kl, k2, and ka, where Vis the number of vertices and E is the number of edges of the graph being examined.",
"title": ""
},
{
"docid": "352bcf1c407568871880ad059053e1ec",
"text": "In this paper we present a novel system for sketching the motion of a character. The process begins by sketching a character to be animated. An animated motion is then created for the character by drawing a continuous sequence of lines, arcs, and loops. These are parsed and mapped to a parameterized set of output motions that further reflect the location and timing of the input sketch. The current system supports a repertoire of 18 different types of motions in 2D and a subset of these in 3D. The system is unique in its use of a cursive motion specification, its ability to allow for fast experimentation, and its ease of use for non-experts.",
"title": ""
},
{
"docid": "4019beb9fa6ec59b4b19c790fe8ff832",
"text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.",
"title": ""
},
{
"docid": "ad401a35f367fabf31b35586bc1d10c4",
"text": "This paper describes a small-size buck-type dc–dc converter for cellular phones. Output power MOSFETs and control circuitry are monolithically integrated. The newly developed pulse frequency modulation control integrated circuit, mounted on a planar inductor within the converter package, has a low quiescent current below 10 μA and a small chip size of 1.4 mm × 1.1 mm in a 0.35-μm CMOS process. The converter achieves a maximum efficiency of 90% and a power density above 100 W/cm<formula formulatype=\"inline\"> <tex Notation=\"TeX\">$^3$</tex></formula>.",
"title": ""
},
{
"docid": "c3112126fa386710fb478dcfe978630e",
"text": "In recent years, distributed intelligent microelectromechanical systems (DiMEMSs) have appeared as a new form of distributed embedded systems. DiMEMSs contain thousands or millions of removable autonomous devices, which will collaborate with each other to achieve the final target of the whole system. Programming such systems is becoming an extremely difficult problem. The difficulty is due not only to their inherent nature of distributed collaboration, mobility, large scale, and limited resources of their devices (e.g., in terms of energy, memory, communication, and computation) but also to the requirements of real-time control and tolerance for uncertainties such as inaccurate actuation and unreliable communications. As a result, existing programming languages for traditional distributed and embedded systems are not suitable for DiMEMSs. In this article, we first introduce the origin and characteristics of DiMEMSs and then survey typical implementations of DiMEMSs and related research hotspots. Finally, we propose a real-time programming framework that can be used to design new real-time programming languages for DiMEMSs. The framework is composed of three layers: a real-time programming model layer, a compilation layer, and a runtime system layer. The design challenges and requirements of these layers are investigated. The framework is then discussed in further detail and suggestions for future research are given.",
"title": ""
},
{
"docid": "2f4a4c223c13c4a779ddb546b3e3518c",
"text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.",
"title": ""
},
{
"docid": "74373dd009fc6285b8f43516d8e8bf2c",
"text": "Computational speech reconstruction algorithms have the ultimate aim of returning natural sounding speech to aphonic and dysphonic patients as well as those who can only whisper. In particular, individuals who have lost glottis function due to disease or surgery, retain the power of vocal tract modulation to some degree but they are unable to speak anything more than hoarse whispers without prosthetic aid. While whispering can be seen as a natural and secondary aspect of speech communications for most people, it becomes the primary mechanism of communications for those who have impaired voice production mechanisms, such as laryngectomees. In this paper, by considering the current limitations of speech reconstruction methods, a novel algorithm for converting whispers to normal speech is proposed and the efficiency of the algorithm is explored. The algorithm relies upon cascading mapping models and makes use of artificially generated whispers (called whisperised speech) to regenerate natural phonated speech from whispers. Using a training-based approach, the mapping models exploit whisperised speech to overcome frame to frame time alignment problems that are inherent in the speech reconstruction process. This algorithm effectively regenerates missing information in the conventional frameworks of phonated speech reconstruction, ∗Corresponding author Email address: [email protected] (Hamid R. Sharifzadeh) Preprint submitted to Journal of Computers & Electrical Engineering February 15, 2016 and is able to outperform the current state-of-the-art regeneration methods using both subjective and objective criteria.",
"title": ""
},
{
"docid": "bab429bf74fe4ce3f387a716964a867f",
"text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.",
"title": ""
},
{
"docid": "715fda02bad1633be9097cc0a0e68c8d",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "7f82ff12310f74b17ba01cac60762a8c",
"text": "For worst case parameter mismatch, modest levels of unbalance are predicted through the use of minimum gate decoupling, dynamic load lines with high Q values, common source inductance or high yield screening. Each technique is evaluated in terms of current unbalance, transition energy, peak turn-off voltage and parasitic oscillations, as appropriate, for various pulse duty cycles and frequency ranges.",
"title": ""
},
{
"docid": "edcf1cb4d09e0da19c917eab9eab3b23",
"text": "The paper describes a computerized process of myocardial perfusion diagnosis from cardiac single proton emission computed tomography (SPECT) images using data mining and knowledge discovery approach. We use a six-step knowledge discovery process. A database consisting of 267 cleaned patient SPECT images (about 3000 2D images), accompanied by clinical information and physician interpretation was created first. Then, a new user-friendly algorithm for computerizing the diagnostic process was designed and implemented. SPECT images were processed to extract a set of features, and then explicit rules were generated, using inductive machine learning and heuristic approaches to mimic cardiologist's diagnosis. The system is able to provide a set of computer diagnoses for cardiac SPECT studies, and can be used as a diagnostic tool by a cardiologist. The achieved results are encouraging because of the high correctness of diagnoses.",
"title": ""
},
{
"docid": "6a2562987d10cdc499aca15da5526ebf",
"text": "The underwater images usually suffers from non-uniform lighting, low contrast, blur and diminished colors. In this paper, we proposed an image based preprocessing technique to enhance the quality of the underwater images. The proposed technique comprises a combination of four filters such as homomorphic filtering, wavelet denoising, bilateral filter and contrast equalization. These filters are applied sequentially on degraded underwater images. The literature survey reveals that image based preprocessing algorithms uses standard filter techniques with various combinations. For smoothing the image, the image based preprocessing algorithms uses the anisotropic filter. The main drawback of the anisotropic filter is that iterative in nature and computation time is high compared to bilateral filter. In the proposed technique, in addition to other three filters, we employ a bilateral filter for smoothing the image. The experimentation is carried out in two stages. In the first stage, we have conducted various experiments on captured images and estimated optimal parameters for bilateral filter. Similarly, optimal filter bank and optimal wavelet shrinkage function are estimated for wavelet denoising. In the second stage, we conducted the experiments using estimated optimal parameters, optimal filter bank and optimal wavelet shrinkage function for evaluating the proposed technique. We evaluated the technique using quantitative based criteria such as a gradient magnitude histogram and Peak Signal to Noise Ratio (PSNR). Further, the results are qualitatively evaluated based on edge detection results. The proposed technique enhances the quality of the underwater images and can be employed prior to apply computer vision techniques.",
"title": ""
},
{
"docid": "b09c438933e0c9300e19f035eb0e9305",
"text": "A Reverse Conducting IGBT (RC-IGBT) is a promising device to reduce a size and cost of the power module thanks to the integration of IGBT and FWD into a single chip. However, it is difficult to achieve well-balanced performance between IGBT and FWD. Indeed, the total inverter loss of the conventional RC-IGBT was not so small as the individual IGBT and FWD pair. To minimize the loss, the most important key is the improvement of reverse recovery characteristics of FWD. We carefully extracted five effective parameters to improve the FWD characteristics, and investigated the impact of these parameters by using simulation and experiments. Finally, optimizing these parameters, we succeeded in fabricating the second-generation 600V class RC-IGBT with a smaller FWD loss than the first-generation RC-IGBT.",
"title": ""
},
{
"docid": "3b05b099ee7e043c43270e92ba5290bd",
"text": "In connection with a study of various aspects of the modifiability of behavior in the dancing mouse a need for definite knowledge concerning the relation of strength of stimulus to rate of learning arose. It was for the purpose of obtaining this knowledge that we planned and executed the experiments which are now to be described. Our work was greatly facilitated by the advice and assistance of Doctor E. G. MARTIN, Professor G. W. PIERCE, and Professor A. E. KENNELLY, and we desire to express here both our indebtedness and our thanks for their generous services.",
"title": ""
}
] | scidocsrr |
83ff51ddc5d8764e9fc199434ce90fa4 | UTOPIAN: User-Driven Topic Modeling Based on Interactive Nonnegative Matrix Factorization | [
{
"docid": "fce925493fc9f7cbbe4c202e5e625605",
"text": "Topic models are a useful and ubiquitous tool for understanding large corpora. However, topic models are not perfect, and for many users in computational social science, digital humanities, and information studies—who are not machine learning experts—existing models and frameworks are often a “take it or leave it” proposition. This paper presents a mechanism for giving users a voice by encoding users’ feedback to topic models as correlations between words into a topic model. This framework, interactive topic modeling (itm), allows untrained users to encode their feedback easily and iteratively into the topic models. Because latency in interactive systems is crucial, we develop more efficient inference algorithms for tree-based topic models. We validate the framework both with simulated and real users.",
"title": ""
},
{
"docid": "9e0f3f1ec7b54c5475a0448da45e4463",
"text": "Significant effort has been devoted to designing clustering algorithms that are responsive to user feedback or that incorporate prior domain knowledge in the form of constraints. However, users desire more expressive forms of interaction to influence clustering outcomes. In our experiences working with diverse application scientists, we have identified an interaction style scatter/gather clustering that helps users iteratively restructure clustering results to meet their expectations. As the names indicate, scatter and gather are dual primitives that describe whether clusters in a current segmentation should be broken up further or, alternatively, brought back together. By combining scatter and gather operations in a single step, we support very expressive dynamic restructurings of data. Scatter/gather clustering is implemented using a nonlinear optimization framework that achieves both locality of clusters and satisfaction of user-supplied constraints. We illustrate the use of our scatter/gather clustering approach in a visual analytic application to study baffle shapes in the bat biosonar (ears and nose) system. We demonstrate how domain experts are adept at supplying scatter/gather constraints, and how our framework incorporates these constraints effectively without requiring numerous instance-level constraints.",
"title": ""
}
] | [
{
"docid": "bdb4aba2b34731ffdf3989d6d1186270",
"text": "In order to push the performance on realistic computer vision tasks, the number of classes in modern benchmark datasets has significantly increased in recent years. This increase in the number of classes comes along with increased ambiguity between the class labels, raising the question if top-1 error is the right performance measure. In this paper, we provide an extensive comparison and evaluation of established multiclass methods comparing their top-k performance both from a practical as well as from a theoretical perspective. Moreover, we introduce novel top-k loss functions as modifications of the softmax and the multiclass SVM losses and provide efficient optimization schemes for them. In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization. An interesting insight of this paper is that the softmax loss yields competitive top-k performance for all k simultaneously. For a specific top-k error, our new top-k losses lead typically to further improvements while being faster to train than the softmax.",
"title": ""
},
{
"docid": "a81b4f234f126589165994bb1b2d844f",
"text": "Most social media commentary in the Arabic language space is made using unstructured non-grammatical slang Arabic language, presenting complex challenges for sentiment analysis and opinion extraction of online commentary and micro blogging data in this important domain. This paper provides a comprehensive analysis of the important research works in the field of Arabic sentiment analysis. An in-depth qualitative analysis of the various features of the research works is carried out and a summary of objective findings is presented. We used smoothness analysis to evaluate the percentage error in the performance scores reported in the studies from their linearly-projected values (smoothness) which is an estimate of the influence of the different approaches used by the authors on the performance scores obtained. To solve a bounding issue with the data as it was reported, we modified existing logarithmic smoothing technique and applied it to pre-process the performance scores before the analysis. Our results from the analysis have been reported and interpreted for the various performance parameters: accuracy, precision, recall and F-score. Keywords—Arabic Sentiment Analysis; Qualitative Analysis; Quantitative Analysis; Smoothness Analysis",
"title": ""
},
{
"docid": "2ae7d7272c2cf82a3488e0b83b13f694",
"text": "Valgus extension osteotomy (VGEO) is a salvage procedure for 'hinge abduction' in Perthes' disease. The indications for its use are pain and fixed deformity. Our study shows the clinical results at maturity of VGEO carried out in 48 children (51 hips) and the factors which influence subsequent remodelling of the hip. After a mean follow-up of ten years, total hip replacement has been carried out in four patients and arthrodesis in one. The average Iowa Hip Score in the remainder was 86 (54 to 100). Favourable remodelling of the femoral head was seen in 12 hips. This was associated with three factors at surgery; younger age (p = 0.009), the phase of reossification (p = 0.05) and an open triradiate cartilage (p = 0.0007). Our study has shown that, in the short term, VGEO relieves pain and corrects deformity; as growth proceeds it may produce useful remodelling in this worst affected subgroup of children with Perthes' disease.",
"title": ""
},
{
"docid": "2f6b4c5ff4f9fbb4a9f24efb4f42cfd2",
"text": "Painful acute cysts in the natal cleft or lower back, known as pilonidal sinus disease, are a severe burden to many younger patients. Although surgical intervention is the preferred first line treatment, postsurgical wound healing disturbances are frequently reported due to infection or other complications. Different treatment options of pilonidal cysts have been discussed in the literature, however, no standardised guideline for the postsurgical wound treatment is available. After surgery, a common recommended treatment to patients is rinsing the wound with clean water and dressing with a sterile compress. We present a case series of seven patients with wounds healing by secondary intention after surgical intervention of a pilonidal cyst. The average age of the patients was 40 years old. Of the seven patients, three had developed a wound healing disturbance, one wound had started to develop a fibrin coating and three were in a good condition. The applied wound care regimens comprised appropriate mechanical or autolytic debridement, rinsing with an antimicrobial solution, haemoglobin application, and primary and secondary dressings. In all seven cases a complete wound closure was achieved within an average of 76 days with six out of seven wounds achieving wound closure within 23-98 days. Aesthetic appearance was deemed excellent in five out of seven cases excellent and acceptable in one. Treatment of one case with a sustained healing disturbance did result in wound closure but with a poor aesthetic outcome and an extensive cicatrisation of the new tissue. Based on these results we recommend that to avoid healing disturbances of wounds healing by secondary intention after surgical pilonidal cyst intervention, an adequate wound care regime comprising appropriate wound debridement, rinsing, topically applied haemoglobin and adequate wound dressing is recommendable as early as possible after surgery.",
"title": ""
},
{
"docid": "c72e8982a13f43d8e3debda561f3cf41",
"text": "This paper presents AOP++, a generic aspect-oriented programming framework in C++. It successfully incorporates AOP with object-oriented programming as well as generic programming naturally in the framework of standard C++. It innovatively makes use of C++ templates to express pointcut expressions and match join points at compile time. It innovatively creates a full-fledged aspect weaver by using template metaprogramming techniques to perform aspect weaving. It is notable that AOP++ itself is written completely in standard C++, and requires no language extensions. With the help of AOP++, C++ programmers can facilitate AOP with only a little effort.",
"title": ""
},
{
"docid": "9902a306ff4c633f30f6d9e56aa8335c",
"text": "The bank director was pretty upset noticing Joe, the system administrator, spending his spare time playing Mastermind, an old useless game of the 70ies. He had fought the instinct of telling him how to better spend his life, just limiting to look at him in disgust long enough to be certain to be noticed. No wonder when the next day the director fell on his chair astonished while reading, on the newspaper, about a huge digital fraud on the ATMs of his bank, with millions of Euros stolen by a team of hackers all around the world. The article mentioned how the hackers had ‘played with the bank computers just like playing Mastermind’, being able to disclose thousands of user PINs during the one-hour lunch break. That precise moment, a second before falling senseless, he understood the subtle smile on Joe’s face the day before, while training at his preferred game, Mastermind.",
"title": ""
},
{
"docid": "fb33cb426377a2fdc2bc597ab59c0f78",
"text": "OBJECTIVES\nTo present a combination of clinical and histopathological criteria for diagnosing cheilitis glandularis (CG), and to evaluate the association between CG and squamous cell carcinoma (SCC).\n\n\nMATERIALS AND METHODS\nThe medical literature in English was searched from 1950 to 2010 and selected demographic data, and clinical and histopathological features of CG were retrieved and analysed.\n\n\nRESULTS\nA total of 77 cases have been published and four new cases were added to the collective data. The clinical criteria applied included the coexistence of multiple lesions and mucoid/purulent discharge, while the histopathological criteria included two or more of the following findings: sialectasia, chronic inflammation, mucous/oncocytic metaplasia and mucin in ducts. Only 47 (58.0%) cases involving patients with a mean age of 48.5 ± 20.3 years and a male-to-female ratio of 2.9:1 fulfilled the criteria. The lower lip alone was most commonly affected (70.2%). CG was associated with SCC in only three cases (3.5%) for which there was a clear aetiological factor for the malignancy.\n\n\nCONCLUSIONS\nThe proposed diagnostic criteria can assist in delineating true CG from a variety of lesions with a comparable clinical/histopathological presentation. CG in association with premalignant/malignant epithelial changes of the lower lip may represent secondary, reactive changes of the salivary glands.",
"title": ""
},
{
"docid": "3f5f3a31cbf45065ea82cf60140a8bf5",
"text": "This paper presents a nonholonomic path planning method, aiming at taking into considerations of curvature constraint, length minimization, and computational demand, for car-like mobile robot based on cubic spirals. The generated path is made up of at most five segments: at most two maximal-curvature cubic spiral segments with zero curvature at both ends in connection with up to three straight line segments. A numerically efficient process is presented to generate a Cartesian shortest path among the family of paths considered for a given pair of start and destination configurations. Our approach is resorted to minimization via linear programming over the sum of length of each path segment of paths synthesized based on minimal locomotion cubic spirals linking start and destination orientations through a selected intermediate orientation. The potential intermediate configurations are not necessarily selected from the symmetric mean circle for non-parallel start and destination orientations. The novelty of the presented path generation method based on cubic spirals is: (i) Practical: the implementation is straightforward so that the generation of feasible paths in an environment free of obstacles is efficient in a few milliseconds; (ii) Flexible: it lends itself to various generalizations: readily applicable to mobile robots capable of forward and backward motion and Dubins’ car (i.e. car with only forward driving capability); well adapted to the incorporation of other constraints like wall-collision avoidance encountered in robot soccer games; straightforward extension to planning a path connecting an ordered sequence of target configurations in simple obstructed environment. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "41d338dd3a1d0b37e9050d0fcdb27569",
"text": "Loneliness and depression are associated, in particular in older adults. Less is known about the role of social networks in this relationship. The present study analyzes the influence of social networks in the relationship between loneliness and depression in the older adult population in Spain. A population-representative sample of 3535 adults aged 50 years and over from Spain was analyzed. Loneliness was assessed by means of the three-item UCLA Loneliness Scale. Social network characteristics were measured using the Berkman–Syme Social Network Index. Major depression in the previous 12 months was assessed with the Composite International Diagnostic Interview (CIDI). Logistic regression models were used to analyze the survey data. Feelings of loneliness were more prevalent in women, those who were younger (50–65), single, separated, divorced or widowed, living in a rural setting, with a lower frequency of social interactions and smaller social network, and with major depression. Among people feeling lonely, those with depression were more frequently married and had a small social network. Among those not feeling lonely, depression was associated with being previously married. In depressed people, feelings of loneliness were associated with having a small social network; while among those without depression, feelings of loneliness were associated with being married. The type and size of social networks have a role in the relationship between loneliness and depression. Increasing social interaction may be more beneficial than strategies based on improving maladaptive social cognition in loneliness to reduce the prevalence of depression among Spanish older adults.",
"title": ""
},
{
"docid": "d84bd9aecd5e5a5b744bbdbffddfd65f",
"text": "Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver. However, the index construction of these variables could result in their strong correlation, thus preventing rated characters from being plotted accurately. Phase 1 of this study tested the indices of the Godspeed questionnaire as measures of humanlike characters. The results indicate significant and strong correlations among the relevant indices (Bartneck, Kulić, Croft, & Zoghbi, 2009). Phase 2 of this study developed alternative indices with nonsignificant correlations (p > .05) between the proposed y-axis eeriness and x-axis perceived humanness (r = .02). The new humanness and eeriness indices facilitate plotting relations among rated characters of varying human likeness. 2010 Elsevier Ltd. All rights reserved. 1. Plotting emotional responses to humanlike characters Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver (Fig. 1). The graph predicts that more human-looking characters will be perceived as more agreeable up to a point at which they become so human people find their nonhuman imperfections unsettling (MacDorman, Green, Ho, & Koch, 2009; MacDorman & Ishiguro, 2006; Mori, 1970). This dip in appraisal marks the start of the uncanny valley (bukimi no tani in Japanese). As characters near complete human likeness, they rise out of the valley, and people once again feel at ease with them. In essence, a character’s imperfections expose a mismatch between the human qualities that are expected and the nonhuman qualities that instead follow, or vice versa. As an example of things that lie in the uncanny valley, Mori (1970) cites corpses, zombies, mannequins coming to life, and lifelike prosthetic hands. Assuming the uncanny valley exists, what dependent variable is appropriate to represent Mori’s graph? Mori referred to the y-axis as shinwakan, a neologism even in Japanese, which has been variously translated as familiarity, rapport, and comfort level. Bartneck, Kanda, Ishiguro, and Hagita (2009) have proposed using likeability to represent shinwakan, and they applied a likeability index to the evaluation of interactions with Ishiguro’s android double, the Geminoid HI-1. Likeability is virtually synonymous with interpersonal warmth (Asch, 1946; Fiske, Cuddy, & Glick, 2007; Rosenberg, Nelson, & Vivekananthan, 1968), which is also strongly correlated with other important measures, such as comfortability, communality, sociability, and positive (vs. negative) affect (Abele & Wojciszke, 2007; MacDorman, Ough, & Ho, 2007; Mehrabian & Russell, 1974; Sproull, Subramani, Kiesler, Walker, & Waters, 1996; Wojciszke, Abele, & Baryla, 2009). Warmth is the primary dimension of human social perception, accounting for 53% of the variance in perceptions of everyday social behaviors (Fiske, Cuddy, Glick, & Xu, 2002; Fiske et al., 2007; Wojciszke, Bazinska, & Jaworski, 1998). Despite the importance of warmth, this concept misses the essence of the uncanny valley. Mori (1970) refers to negative shinwakan as bukimi, which translates as eeriness. However, eeriness is not the negative anchor of warmth. A person can be cold and disagreeable without being eerie—at least not eerie in the way that an artificial human being is eerie. 
In addition, the set of negative emotions that predict eeriness (e.g., fear, anxiety, and disgust) are more specific than coldness (Ho, MacDorman, & Pramono, 2008). Thus, shinwakan and bukimi appear to constitute distinct dimensions. Although much has been written on potential benchmarks for anthropomorphic robots (for reviews see Kahn et al., 2007; MacDorman & Cowley, 2006; MacDorman & Kahn, 2007), no indices have been developed and empirically validated for measuring shinwakan or related concepts across a range of humanlike stimuli, such as computer-animated human characters and humanoid robots. The Godspeed questionnaire, compiled by Bartneck, Kulić, Croft, and Zoghbi (2009), includes at least two concepts, anthropomorphism and likeability, that could potentially serve as the xand y-axes of Mori’s graph (Bartneck, Kanda, et al., 2009). Although the 0747-5632/$ see front matter 2010 Elsevier Ltd. All rights reserved. doi:10.1016/j.chb.2010.05.015 * Corresponding author. Tel.: +1 317 215 7040. E-mail address: [email protected] (K.F. MacDorman). URL: http://www.macdorman.com (K.F. MacDorman). Computers in Human Behavior 26 (2010) 1508–1518",
"title": ""
},
{
"docid": "cbb6c80bc986b8b1e1ed3e70abb86a79",
"text": "CD44 is a cell surface adhesion receptor that is highly expressed in many cancers and regulates metastasis via recruitment of CD44 to the cell surface. Its interaction with appropriate extracellular matrix ligands promotes the migration and invasion processes involved in metastases. It was originally identified as a receptor for hyaluronan or hyaluronic acid and later to several other ligands including, osteopontin (OPN), collagens, and matrix metalloproteinases. CD44 has also been identified as a marker for stem cells of several types. Beside standard CD44 (sCD44), variant (vCD44) isoforms of CD44 have been shown to be created by alternate splicing of the mRNA in several cancer. Addition of new exons into the extracellular domain near the transmembrane of sCD44 increases the tendency for expressing larger size vCD44 isoforms. Expression of certain vCD44 isoforms was linked with progression and metastasis of cancer cells as well as patient prognosis. The expression of CD44 isoforms can be correlated with tumor subtypes and be a marker of cancer stem cells. CD44 cleavage, shedding, and elevated levels of soluble CD44 in the serum of patients is a marker of tumor burden and metastasis in several cancers including colon and gastric cancer. Recent observations have shown that CD44 intracellular domain (CD44-ICD) is related to the metastatic potential of breast cancer cells. However, the underlying mechanisms need further elucidation.",
"title": ""
},
{
"docid": "10b8223c9005bd5bdd2836d17541bbb1",
"text": "This study explores the stability of attachment security and representations from infancy to early adulthood in a sample chosen originally for poverty and high risk for poor developmental outcomes. Participants for this study were 57 young adults who are part of an ongoing prospective study of development and adaptation in a high-risk sample. Attachment was assessed during infancy by using the Ainsworth Strange Situation (Ainsworth & Wittig) and at age 19 by using the Berkeley Adult Attachment Interview (George, Kaplan, & Main). Possible correlates of continuity and discontinuity in attachment were drawn from assessments of the participants and their mothers over the course of the study. Results provided no evidence for significant continuity between infant and adult attachment in this sample, with many participants transitioning to insecurity. The evidence, however, indicated that there might be lawful discontinuity. Analyses of correlates of continuity and discontinuity in attachment classification from infancy to adulthood indicated that the continuous and discontinuous groups were differentiated on the basis of child maltreatment, maternal depression, and family functioning in early adolescence. These results provide evidence that although attachment has been found to be stable over time in other samples, attachment representations are vulnerable to difficult and chaotic life experiences.",
"title": ""
},
{
"docid": "f83228e2130f464b8c5b1837d338d7e1",
"text": "This article is focused on examining the factors and relationships that influence the browsing and buying behavior of individuals when they shop online. Specifically, we are interested in individual buyers using business-to-consumer sites. We are also interested in examining shopping preferences based on various demographic categories that might exhibit distinct purchasing attitudes and behaviors for certain categories of products and services. We examine these behaviors in the context of both products and services. After a period of decline in recent months, online shopping is on the rise again. By some estimates, total U.S. spending on online sales increased to $5.7 billion in December 2001 from $3.2 billion in June of 2001 [3, 5]. By these same estimates, the number of households shopping online increased to 18.7 million in December 2001 from 13.1 million in June 2001. Consumers spent an average of $304 per person in December 2001, compared with $247 in June 2001. According to an analyst at Forrester: “The fact that online retail remained stable during ... such social and economic instability speaks volumes about how well eCommerce is positioned to stand up to a poor economy” [4]. What do consumers utilize the Internet for? Nie and Erbring suggest that 52% of the consumers use the Internet for product information, 42% for travel information, and 24% for buying [9]. Recent online consumer behavior-related research refers to any Internet-related activity associated with the consumption of goods, services, and information [6]. In the definition of Internet consumption, Goldsmith and Bridges include “gathering information passively via exposure to advertising; shopping, which includes both browsing and deliberate information search, and the selection and buying of specific goods, services, and information” [7]. For the purposes of this study, we focus on all aspects of this consumption. We include all of them because information gathering aspects of e-commerce serve to educate the consumer, which is ulti-",
"title": ""
},
{
"docid": "b3d42332cd9572813bc08efc670d34d7",
"text": "Context: The use of Systematic Literature Review (SLR) requires expertise and poses many challenges for novice researchers. The experiences of those who have used this research methodology can benefit novice researchers in effectively dealing with these challenges. Objective: The aim of this study is to record the reported experiences of conducting Systematic Literature Reviews, for the benefit of new researchers. Such a review will greatly benefit the researchers wanting to conduct SLR for the very first time. Method: We conducted a tertiary study to gather the experiences published by researchers. Studies that have used the SLR research methodology in software engineering and have implicitly or explicitly reported their experiences are included in this review. Results: Our research has revealed 116 studies relevant to the theme. The data has been extracted by two researchers working independently and conflicts resolved after discussion with third researcher. Findings from these studies highlight Search Strategy, Online Databases, Planning and Data Extraction as the most challenging phases of SLR. Lack of standard terminology in software engineering papers, poor quality of abstracts and problems with search engines are some of the most cited challenges. Conclusion: Further research and guidelines is required to facilitate novice researchers in conducting these phases properly.",
"title": ""
},
{
"docid": "06b86a3d7f324fba7d95c358e0c38a8f",
"text": "Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS Forestto systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.",
"title": ""
},
{
"docid": "4a9b82729bc4658bf2e54c90f74ea1c8",
"text": "To operate reliably in real-world traffic, an autonomous car must evaluate the consequences of its potential actions by anticipating the uncertain intentions of other traffic participants. This paper presents an integrated behavioral inference and decision-making approach that models vehicle behavior for both our vehicle and nearby vehicles as a discrete set of closedloop policies that react to the actions of other agents. Each policy captures a distinct high-level behavior and intention, such as driving along a lane or turning at an intersection. We first employ Bayesian changepoint detection on the observed history of states of nearby cars to estimate the distribution over potential policies that each nearby car might be executing. We then sample policies from these distributions to obtain high-likelihood actions for each participating vehicle. Through closed-loop forward simulation of these samples, we can evaluate the outcomes of the interaction of our vehicle with other participants (e.g., a merging vehicle accelerates and we slow down to make room for it, or the vehicle in front of ours suddenly slows down and we decide to pass it). Based on those samples, our vehicle then executes the policy with the maximum expected reward value. Thus, our system is able to make decisions based on coupled interactions between cars in a tractable manner. This work extends our previous multipolicy system [11] by incorporating behavioral anticipation into decision-making to evaluate sampled potential vehicle interactions. We evaluate our approach using real-world traffic-tracking data from our autonomous vehicle platform, and present decision-making results in simulation involving highway traffic scenarios.",
"title": ""
},
{
"docid": "328c1c6ed9e38a851c6e4fd3ab71c0f8",
"text": "We present the MSP-IMPROV corpus, a multimodal emotional database, where the goal is to have control over lexical content and emotion while also promoting naturalness in the recordings. Studies on emotion perception often require stimuli with fixed lexical content, but that convey different emotions. These stimuli can also serve as an instrument to understand how emotion modulates speech at the phoneme level, in a manner that controls for coarticulation. Such audiovisual data are not easily available from natural recordings. A common solution is to record actors reading sentences that portray different emotions, which may not produce natural behaviors. We propose an alternative approach in which we define hypothetical scenarios for each sentence that are carefully designed to elicit a particular emotion. Two actors improvise these emotion-specific situations, leading them to utter contextualized, non-read renditions of sentences that have fixed lexical content and convey different emotions. We describe the context in which this corpus was recorded, the key features of the corpus, the areas in which this corpus can be useful, and the emotional content of the recordings. The paper also provides the performance for speech and facial emotion classifiers. The analysis brings novel classification evaluations where we study the performance in terms of inter-evaluator agreement and naturalness perception, leveraging the large size of the audiovisual database.",
"title": ""
},
{
"docid": "e84ff3f37e049bd649a327366a4605f9",
"text": "Once thought of as a technology restricted primarily to the scientific community, High-performance Computing (HPC) has now been established as an important value creation tool for the enterprises. Predominantly, the enterprise HPC is fueled by the needs for high-performance data analytics (HPDA) and large-scale machine learning – trades instrumental to business growth in today’s competitive markets. Cloud computing, characterized by the paradigm of on-demand network access to computational resources, has great potential of bringing HPC capabilities to a broader audience. Clouds employing traditional lossy network technologies, however, at large, have not proved to be sufficient for HPC applications. Both the traditional HPC workloads and HPDA require high predictability, large bandwidths, and low latencies, features which combined are not readily available using best-effort cloud networks. On the other hand, lossless interconnection networks commonly deployed in HPC systems, lack the flexibility needed for dynamic cloud environments. In this thesis, we identify and address research challenges that hinder the realization of an efficient HPC cloud computing platform, utilizing the InfiniBand interconnect as a demonstration technology. In particular, we address challenges related to efficient routing, load-balancing, low-overhead virtualization, performance isolation, and fast network reconfiguration, all to improve the utilization and flexibility of the underlying interconnect of an HPC cloud. In addition, we provide a framework to realize a self-adaptive network architecture for HPC clouds, offering dynamic and autonomic adaptation of the underlying interconnect according to varying traffic patterns, resource availability, workload distribution, and also in accordance with service provider defined policies. The work presented in this thesis helps bridging the performance gap between the cloud and traditional HPC infrastructures; the thesis provides practical solutions to enable an efficient, flexible, multi-tenant HPC network suitable for high-performance cloud computing.",
"title": ""
},
{
"docid": "5a589c7beb17374e17c766634d822a80",
"text": "Images now come in different forms – color, near-infrared, depth, etc. – due to the development of special and powerful cameras in computer vision and computational photography. Their cross-modal correspondence establishment is however left behind. We address this challenging dense matching problem considering structure variation possibly existing in these image sets and introduce new model and solution. Our main contribution includes designing the descriptor named robust selective normalized cross correlation (RSNCC) to establish dense pixel correspondence in input images and proposing its mathematical parameterization to make optimization tractable. A computationally robust framework including global and local matching phases is also established. We build a multi-modal dataset including natural images with labeled sparse correspondence. Our method will benefit image and vision applications that require accurate image alignment.",
"title": ""
},
{
"docid": "e043f20a60df6399c2f93d064d61e648",
"text": "Recent research in recommender systems has shown that collaborative filtering algorithms are highly susceptible to attacks that insert biased profile data. Theoretical analyses and empirical experiments have shown that certain attacks can have a significant impact on the recommendations a system provides. These analyses have generally not taken into account the cost of mounting an attack or the degree of prerequisite knowledge for doing so. For example, effective attacks often require knowledge about the distribution of user ratings: the more such knowledge is required, the more expensive the attack to be mounted. In our research, we are examining a variety of attack models, aiming to establish the likely practical risks to collaborative systems. In this paper, we examine user-based collaborative filtering and some attack models that are successful against it, including a limited knowledge \"bandwagon\" attack that requires only that the attacker identify a small number of very popular items and a user-focused \"favorite item\" attack that is also effective against item-based algorithms.",
"title": ""
}
] | scidocsrr |
eacc5b915ce11792286986f305652163 | Fuzzy Filter Design for Nonlinear Systems in Finite-Frequency Domain | [
{
"docid": "239644f4ecd82758ca31810337a10fda",
"text": "This paper discusses a design of stable filters withH∞ disturbance attenuation of Takagi–Sugeno fuzzy systemswith immeasurable premise variables. When we consider the filter design of Takagi–Sugeno fuzzy systems, the selection of premise variables plays an important role. If the premise variable is the state of the system, then a fuzzy system describes a wide class of nonlinear systems. In this case, however, a filter design of fuzzy systems based on parallel distributed compensator idea is infeasible. To avoid such a difficulty, we consider the premise variables uncertainties. Then we consider a robust H∞ filtering problem for such an uncertain system. A solution of the problem is given in terms of linear matrix inequalities (LMIs). Some numerical examples are given to illustrate our theory. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "eaf16b3e9144426aed7edc092ad4a649",
"text": "In order to use a synchronous dynamic RAM (SDRAM) as the off-chip memory of an H.264/AVC encoder, this paper proposes an efficient SDRAM memory controller with an asynchronous bridge. With the proposed architecture, the SDRAM bandwidth is increased by making the operation frequency of an external SDRAM higher than that of the hardware accelerators of an H.264/AVC encoder. Experimental results show that the encoding speed is increased by 30.5% when the SDRAM clock frequency is increased from 100 MHz to 200 MHz while the H.264/AVC hardware accelerators operate at 100 MHz.",
"title": ""
},
{
"docid": "42127829aebaaaa4a4ac6c7e9417feaf",
"text": "The study was to compare treatment preference, efficacy, and tolerability of sildenafil citrate (sildenafil) and tadalafil for treating erectile dysfunction (ED) in Chinese men naοve to phosphodiesterase 5 (PDE5) inhibitor therapies. This multicenter, randomized, open-label, crossover study evaluated whether Chinese men with ED preferred 20-mg tadalafil or 100-mg sildenafil. After a 4 weeks baseline assessment, 383 eligible patients were randomized to sequential 20-mg tadalafil per 100-mg sildenafil or vice versa for 8 weeks respectively and then chose which treatment they preferred to take during the 8 weeks extension. Primary efficacy was measured by Question 1 of the PDE5 Inhibitor Treatment Preference Questionnaire (PITPQ). Secondary efficacy was analyzed by PITPQ Question 2, the International Index of Erectile Function (IIEF) erectile function (EF) domain, sexual encounter profile (SEP) Questions 2 and 3, and the Drug Attributes Questionnaire. Three hundred and fifty men (91%) completed the randomized treatment phase. Two hundred and forty-two per 350 (69.1%) patients preferred 20-mg tadalafil, and 108/350 (30.9%) preferred 100-mg sildenafil (P < 0.001) as their treatment in the 8 weeks extension. Ninety-two per 242 (38%) patients strongly preferred tadalafil and 37/108 (34.3%) strongly the preferred sildenafil. The SEP2 (penetration), SEP3 (successful intercourse), and IIEF-EF domain scores were improved in both tadalafil and sildenafil treatment groups. For patients who preferred tadalafil, getting an erection long after taking the medication was the most reported reason for tadalafil preference. The only treatment-emergent adverse event reported by > 2% of men was headache. After tadalafil and sildenafil treatments, more Chinese men with ED naοve to PDE5 inhibitor preferred tadalafil. Both sildenafil and tadalafil treatments were effective and safe.",
"title": ""
},
{
"docid": "5e952c10a30baffc511bb3ffe86cd4a8",
"text": "Chitin and its deacetylated derivative chitosan are natural polymers composed of randomly distributed -(1-4)linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). Chitin is insoluble in aqueous media while chitosan is soluble in acidic conditions due to the free protonable amino groups present in the D-glucosamine units. Due to their natural origin, both chitin and chitosan can not be defined as a unique chemical structure but as a family of polymers which present a high variability in their chemical and physical properties. This variability is related not only to the origin of the samples but also to their method of preparation. Chitin and chitosan are used in fields as different as food, biomedicine and agriculture, among others. The success of chitin and chitosan in each of these specific applications is directly related to deep research into their physicochemical properties. In recent years, several reviews covering different aspects of the applications of chitin and chitosan have been published. However, these reviews have not taken into account the key role of the physicochemical properties of chitin and chitosan in their possible applications. The aim of this review is to highlight the relationship between the physicochemical properties of the polymers and their behaviour. A functional characterization of chitin and chitosan regarding some biological properties and some specific applications (drug delivery, tissue engineering, functional food, food preservative, biocatalyst immobilization, wastewater treatment, molecular imprinting and metal nanocomposites) is presented. The molecular mechanism of the biological properties such as biocompatibility, mucoadhesion, permeation enhancing effect, anticholesterolemic, and antimicrobial has been up-",
"title": ""
},
{
"docid": "d258a14fc9e64ba612f2c8ea77f85d08",
"text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.",
"title": ""
},
{
"docid": "90df69e590373e757523f4c92a841d5c",
"text": "A new impedance-based stability criterion was proposed for a grid-tied inverter system based on a Norton equivalent circuit of the inverter [18]. As an extension of the work in [18], this paper shows that using a Thévenin representation of the inverter can lead to the same criterion in [18]. Further, this paper shows that the criterion proposed by Middlebrook can still be used for the inverter systems. The link between the criterion in [18] and the original criterion is the inverse Nyquist stability criterion. The criterion in [18] is easier to be used. Because the current feedback controller and the phase-locked loop of the inverter introduce poles at the origin and right-half plane to the output impedance of the inverter. These poles do not appear in the minor loop gain defined in [18] but in the minor loop gain defined by Middlebrook. Experimental systems are used to verify the proposed analysis.",
"title": ""
},
{
"docid": "93e6194dc3d8922edb672ac12333ea82",
"text": "Sensors including RFID tags have been widely deployed for measuring environmental parameters such as temperature, humidity, oxygen concentration, monitoring the location and velocity of moving objects, tracking tagged objects, and many others. To support effective, efficient, and near real-time phenomena probing and objects monitoring, streaming sensor data have to be gracefully managed in an event processing manner. Different from the traditional events, sensor events come with temporal or spatio-temporal constraints and can be non-spontaneous. Meanwhile, like general event streams, sensor event streams can be generated with very high volumes and rates. Primitive sensor events need to be filtered, aggregated and correlated to generate more semantically rich complex events to facilitate the requirements of up-streaming applications. Motivated by such challenges, many new methods have been proposed in the past to support event processing in sensor event streams. In this chapter, we survey state-of-the-art research on event processing in sensor networks, and provide a broad overview of major topics in Springer Science+Business Media New York 2013 © Managing and Mining Sensor Data, DOI 10.1007/978-1-4614-6309-2_4, C.C. Aggarwal (ed.), 77 78 MANAGING AND MINING SENSOR DATA complex RFID event processing, including event specification languages, event detection models, event processing methods and their optimizations. Additionally, we have presented an open discussion on advanced issues such as processing uncertain and out-of-order sensor events.",
"title": ""
},
{
"docid": "26e79793addc4750dcacc0408764d1e1",
"text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.",
"title": ""
},
{
"docid": "2d4cb6980cf8716699bdffca6cfed274",
"text": "Advances in laser technology have progressed so rapidly during the past decade that successful treatment of many cutaneous concerns and congenital defects, including vascular and pigmented lesions, tattoos, scars and unwanted haircan be achieved. The demand for laser surgery has increased as a result of the relative ease with low incidence of adverse postoperative sequelae. In this review, the currently available laser systems with cutaneous applications are outlined to identify the various types of dermatologic lasers available, to list their clinical indications and to understand the possible side effects.",
"title": ""
},
{
"docid": "2b310a05b6a0c0fae45a2e15f8d52101",
"text": "Cyber threats and the field of computer cyber defense are gaining more and more an increased importance in our lives. Starting from our regular personal computers and ending with thin clients such as netbooks or smartphones we find ourselves bombarded with constant malware attacks. In this paper we will present a new and novel way in which we can detect these kind of attacks by using elements of modern game theory. We will present the effects and benefits of game theory and we will talk about a defense exercise model that can be used to train cyber response specialists.",
"title": ""
},
{
"docid": "09085fc15308a96cd9441bb0e23e6c1a",
"text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.",
"title": ""
},
{
"docid": "5394df4e1d6f52a608bfdab8731da088",
"text": "For over a decade, researchers have devoted much effort to construct theoretical models, such as the Technology Acceptance Model (TAM) and the Expectation Confirmation Model (ECM) for explaining and predicting user behavior in IS acceptance and continuance. Another model, the Cognitive Model (COG), was proposed for continuance behavior; it combines some of the variables used in both TAM and ECM. This study applied the technique of structured equation modeling with multiple group analysis to compare the TAM, ECM, and COG models. Results indicate that TAM, ECM, and COG have quite different assumptions about the underlying constructs that dictate user behavior and thus have different explanatory powers. The six constructs in the three models were synthesized to propose a new Technology Continuance Theory (TCT). A major contribution of TCT is that it combines two central constructs: attitude and satisfaction into one continuance model, and has applicability for users at different stages of the adoption life cycle, i.e., initial, short-term and long-term users. The TCT represents a substantial improvement over the TAM, ECM and COG models in terms of both breadth of applicability and explanatory power.",
"title": ""
},
{
"docid": "e4b54824b2528b66e28e82ad7d496b36",
"text": "Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. Methods: The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Results: Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients’ heterogeneity. Significance: The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.",
"title": ""
},
{
"docid": "7a87ffc98d8bab1ff0c80b9e8510a17d",
"text": "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.",
"title": ""
},
{
"docid": "a39091796e8f679f246baa8dce08f213",
"text": "Resource scheduling in cloud is a challenging job and the scheduling of appropriate resources to cloud workloads depends on the QoS requirements of cloud applications. In cloud environment, heterogeneity, uncertainty and dispersion of resources encounters problems of allocation of resources, which cannot be addressed with existing resource allocation policies. Researchers still face troubles to select the efficient and appropriate resource scheduling algorithm for a specific workload from the existing literature of resource scheduling algorithms. This research depicts a broad methodical literature analysis of resource management in the area of cloud in general and cloud resource scheduling in specific. In this survey, standard methodical literature analysis technique is used based on a complete collection of 110 research papers out of large collection of 1206 research papers published in 19 foremost workshops, symposiums and conferences and 11 prominent journals. The current status of resource scheduling in cloud computing is distributed into various categories. Methodical analysis of resource scheduling in cloud computing is presented, resource scheduling algorithms and management, its types and benefits with tools, resource scheduling aspects and resource distribution policies are described. The literature concerning to thirteen types of resource scheduling algorithms has also been stated. Further, eight types of resource distribution policies are described. Methodical analysis of this research work will help researchers to find the important characteristics of resource scheduling algorithms and also will help to select most suitable algorithm for scheduling a specific workload. Future research directions have also been suggested in this research work.",
"title": ""
},
{
"docid": "048d54f4997bfea726f69cf7f030543d",
"text": "In this article, we have reviewed the state of the art of IPT systems and have explored the suitability of the technology to wirelessly charge battery powered vehicles. the review shows that the IPT technology has merits for stationary charging (when the vehicle is parked), opportunity charging (when the vehicle is stopped for a short period of time, for example, at a bus stop), and dynamic charging (when the vehicle is moving along a dedicated lane equipped with an IPT system). Dynamic wireless charging holds promise to partially or completely eliminate the overnight charging through a compact network of dynamic chargers installed on the roads that would keep the vehicle batteries charged at all times, consequently reducing the range anxiety and increasing the reliability of EVs. Dynamic charging can help lower the price of EVs by reducing the size of the battery pack. Indeed, if the recharging energy is readily available, the batteries do not have to support the whole driving range but only supply power when the IPT system is not available. Depending on the power capability, the use of dynamic charging may increase driving range and reduce the size of the battery pack.",
"title": ""
},
{
"docid": "ffb1610fddb36fa4db5fa3c3dc1e5fad",
"text": "The complex methodology of investigations was applied to study a movement structure on bench press. We have checked the usefulness of multimodular measuring system (SMART-E, BTS company, Italy) and a special device for tracking the position of barbell (pantograph). Software Smart Analyser was used to create a database allowing chosen parameters to be compared. The results from different measuring devices are very similar, therefore the replacement of many devices by one multimodular system is reasonable. In our study, the effect of increased barbell load on the values of muscles activity and bar kinematics during the flat bench press movement was clearly visible. The greater the weight of a barbell, the greater the myoactivity of shoulder muscles and vertical velocity of the bar. It was also confirmed the presence of the so-called sticking point (period) during the concentric phase of the bench press. In this study, the initial velocity of the barbell decreased (v(min)) not only under submaximal and maximal loads (90 and 100% of the one repetition maximum; 1-RM), but also under slightly lighter weights (70 and 80% of 1-RM).",
"title": ""
},
{
"docid": "51e2f490072820230d71f648d70babcb",
"text": "Classification and regression trees are becoming increasingly popular for partitioning data and identifying local structure in small and large datasets. Classification trees include those models in which the dependent variable (the predicted variable) is categorical. Regression trees include those in which it is continuous. This paper discusses pitfalls in the use of these methods and highlights where they are especially suitable. Paper presented at the 1992 Sun Valley, ID, Sawtooth/SYSTAT Joint Software Conference.",
"title": ""
},
{
"docid": "8bae8e7937f4c9a492a7030c62d7d9f4",
"text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.",
"title": ""
},
{
"docid": "717dd8e3c699d6cc22ba483002ab0a6f",
"text": "Our analysis of many real-world event based applications has revealed that existing Complex Event Processing technology (CEP), while effective for efficient pattern matching on event stream, is limited in its capability of reacting in realtime to opportunities and risks detected or environmental changes. We are the first to tackle this problem by providing active rule support embedded directly within the CEP engine, henceforth called Active Complex Event Processing technology, or short, Active CEP. We design the Active CEP model and associated rule language that allows rules to be triggered by CEP system state changes and correctly executed during the continuous query process. Moreover we design an Active CEP infrastructure, that integrates the active rule component into the CEP kernel, allowing finegrained and optimized rule processing. We demonstrate the power of Active CEP by applying it to the development of a collaborative project with UMass Medical School, which detects potential threads of infection and reminds healthcare workers to perform hygiene precautions in real-time. 1. BACKGROUND AND MOTIVATION Complex patterns of events often capture exceptions, threats or opportunities occurring across application space and time. Complex Event Processing (CEP) technology has thus increasingly gained popularity for efficiently detecting such event patterns in real-time. For example CEP has been employed by diverse applications ranging from healthcare systems , financial analysis , real-time business intelligence to RFID based surveillance. However, existing CEP technologies [3, 7, 2, 5], while effective for pattern matching, are limited in their capability of supporting active rules. We motivate the need for such capability based on our experience with the development of a real-world hospital infection control system, called HygieneReminder, or short HyReminder. Application: HyReminder. According to the U.S. Centers for Disease Control and Prevention [8], healthcareassociated infections hit 1.7 million people a year in the Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were presented at The 36th International Conference on Very Large Data Bases, September 13-17, 2010, Singapore. Proceedings of the VLDB Endowment, Vol. 3, No. 2 Copyright 2010 VLDB Endowment 2150-8097/10/09... $ 10.00. United States, causing an estimated 99,000 deaths. HyReminder is a collaborated project between WPI and University of Massachusetts Medical School (UMMS) that uses advanced CEP technologies to solve this long-standing public health problem. HyReminder system aims to continuously track healthcare workers (HCW) for hygiene compliance (for example cleansing hands before entering a H1N1 patient’s room), and remind the HCW at the appropriate moments to perform hygiene precautions thus preventing spread of infections. CEP technologies are adopted to efficiently monitor event patterns, such as the sequence that a HCW left a patient room (this behavior is measured by a sensor reading and modeled as “exit” event), did not sanitize his hands (referred as “!sanitize”, where ! 
represents negation), and then entered another patient’s room (referred as “enter”). Such a sequence of behaviors, i.e. SEQ(exit,!sanitize,enter), would be deemed as a violation of hand hygiene regulations. Besides detecting complex events, the HyReminder system requires the ability to specify logic rules reminding HCWs to perform the respective appropriate hygiene upon detection of an imminent hand hygiene violation or an actual observed violation. A condensed version of example logic rules derived from HyReminder and modeled using CEP semantics is depicted in Figure 1. In the figure, the edge marked “Q1.1” expresses the logic that “if query Q1.1 is satisfied for a HCW, then change his hygiene status to warning and change his badge light to yellow”. This logic rule in fact specifies how the system should react to the observed change, here meaning the risk being detected by the continuous pattern matching query Q1.1, during the long running query process. The system’s streaming environment requires that such reactions be executed in a timely fashion. An additional complication arises in that the HCW status changed by this logic rule must be used as a condition by other continuous queries at run time, like Q2.1 and Q2.2. We can see that active rules and continuous queries over streaming data are tightly-coupled: continuous queries are monitoring the world while active rules are changing the world, both in real-time. Yet contrary to traditional databases, data is not persistently stored in a DSMS, but rather streamed through the system in fluctuating arrival rate. Thus processing active rules in CEP systems requires precise synchronization between queries and rules and careful consideration of latency and resource utilization. Limitations of Existing CEP Technology. In summary, the following active functionalities are needed by many event stream applications, but not supported by the existing",
"title": ""
}
] | scidocsrr |
93bc35b87540a4c67cdb45624d821210 | The Riemann Zeros and Eigenvalue Asymptotics | [
{
"docid": "d15a2f27112c6bd8bfa2f9c01471c512",
"text": "Assuming a special version of the Montgomery-Odlyzko law on the pair correlation of zeros of the Riemann zeta function conjectured by Rudnick and Sarnak and assuming the Riemann Hypothesis, we prove new results on the prime number theorem, difference of consecutive primes, and the twin prime conjecture. 1. Introduction. Assuming the Riemann Hypothesis (RH), let us denote by 1=2 ig a nontrivial zero of a primitive L-function L
s;p attached to an irreducible cuspidal automorphic representation of GLm; m ^ 1, over Q. When m 1, this L-function is the Riemann zeta function z
s or the Dirichlet L-function L
s; c for a primitive character c. Rudnick and Sarnak [13] examined the n-level correlation for these zeros and made a far reaching conjecture which is called the Montgomery [9]-Odlyzko [11], [12] Law by Katz and Sarnak [6]. Rudnick and Sarnak also proved a case of their conjecture when a test function f has its Fourier transform b f supported in a restricted region. In this article, we will show that a version of the above conjecture for the pair correlation of zeros of the zeta function z
s implies interesting arithmetical results on prime distribution (Theorems 2, 3, and 4). These results can give us deep insight on possible ultimate bounds of these prime distribution problems. One can also see that the pair (and nlevel) correlation of zeros of zeta and L-functions is a powerful method in number theory. Our computation shows that the test function f and the support of its Fourier transform b f play a crucial role in the conjecture. To see the conjecture in Rudnick and Sarnak [13] in the case of the zeta function z
s and n 2, the pair correlation, we use a test function f
x; y which satisfies the following three conditions: (i) f
x; y f
y; x for any x; y 2 R, (ii) f
x t; y t f
x; y for any t 2 R, and (iii) f
x; y tends to 0 rapidly as j
x; yj ! 1 on the hyperplane x y 0. Arch. Math. 76 (2001) 41±50 0003-889X/01/010041-10 $ 3.50/0 Birkhäuser Verlag, Basel, 2001 Archiv der Mathematik Mathematics Subject Classification (1991): 11M26, 11N05, 11N75. 1) Supported in part by China NNSF Grant # 19701019. 2) Supported in part by USA NSF Grant # DMS 97-01225. Define the function W2
x; y 1ÿ sin p
xÿ y
p
xÿ y : Denote the Dirac function by d
x which satisfies R d
xdx 1 and defines a distribution f 7! f
0. We then define the pair correlation sum of zeros gj of the zeta function: R2
T; f ; h P g1;g2 distinct h g1 T ; g2 T f Lg1 2p ; Lg2 2p ; where T ^ 2, L log T, and h
x; y is a localized cutoff function which tends to zero rapidly when j
x; yj tends to infinity. The conjecture proposed by Rudnick and Sarnak [13] is that R2
T; f ; h 1 2p TL
",
"title": ""
}
] | [
{
"docid": "e4dba25d2528a507e4b494977fd69fc0",
"text": "The illegal distribution of a digital movie is a common and significant threat to the film industry. With the advent of high-speed broadband Internet access, a pirated copy of a digital video can now be easily distributed to a global audience. A possible means of limiting this type of digital theft is digital video watermarking whereby additional information, called a watermark, is embedded in the host video. This watermark can be extracted at the decoder and used to determine whether the video content is watermarked. This paper presents a review of the digital video watermarking techniques in which their applications, challenges, and important properties are discussed, and categorizes them based on the domain in which they embed the watermark. It then provides an overview of a few emerging innovative solutions using watermarks. Protecting a 3D video by watermarking is an emerging area of research. The relevant 3D video watermarking techniques in the literature are classified based on the image-based representations of a 3D video in stereoscopic, depth-image-based rendering, and multi-view video watermarking. We discuss each technique, and then present a survey of the literature. Finally, we provide a summary of this paper and propose some future research directions.",
"title": ""
},
{
"docid": "5e333f4620908dc643ceac8a07ff2a2d",
"text": "Convolutional Neural Networks (CNNs) have reached outstanding results in several complex visual recognition tasks, such as classification and scene parsing. CNNs are composed of multiple filtering layers that perform 2D convolutions over input images. The intrinsic parallelism in such a computation kernel makes it suitable to be effectively accelerated on parallel hardware. In this paper we propose a highly flexible and scalable architectural template for acceleration of CNNs on FPGA devices, based on the cooperation between a set of software cores and a parallel convolution engine that communicate via a tightly coupled L1 shared scratchpad. Our accelerator structure, tested on a Xilinx Zynq XC-Z7045 device, delivers peak performance up to 80 GMAC/s, corresponding to 100 MMAC/s for each DSP slice in the programmable fabric. Thanks to the flexible architecture, convolution operations can be scheduled in order to reduce input/output bandwidth down to 8 bytes per cycle without degrading the performance of the accelerator in most of the meaningful use-cases.",
"title": ""
},
{
"docid": "a4030b9aa31d4cc0a2341236d6f18b5a",
"text": "Generative adversarial networks (GANs) have achieved huge success in unsupervised learning. Most of GANs treat the discriminator as a classifier with the binary sigmoid cross entropy loss function. However, we find that the sigmoid cross entropy loss function will sometimes lead to the saturation problem in GANs learning. In this work, we propose to adopt the L2 loss function for the discriminator. The properties of the L2 loss function can improve the stabilization of GANs learning. With the usage of the L2 loss function, we propose the multi-class generative adversarial networks for the purpose of image generation with multiple classes. We evaluate the multi-class GANs on a handwritten Chinese characters dataset with 3740 classes. The experiments demonstrate that the multi-class GANs can generate elegant images on datasets with a large number of classes. Comparison experiments between the L2 loss function and the sigmoid cross entropy loss function are also conducted and the results demonstrate the stabilization of the L2 loss function.",
"title": ""
},
{
"docid": "c93a401b7ed3031ed6571bfbbf1078c8",
"text": "In this paper we propose a new footstep detection technique for data acquired using a triaxial geophone. The idea evolves from the investigation of geophone transduction principle. The technique exploits the randomness of neighbouring data vectors observed when the footstep is absent. We extend the same principle for triaxial signal denoising. Effectiveness of the proposed technique for transient detection and denoising are presented for real seismic data collected using a triaxial geophone.",
"title": ""
},
{
"docid": "f1559798e0338074f28ca4aaf953b6a1",
"text": "Example classifications (test set) [And09] Andriluka et al. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009 [Eic09] Eichner et al. Articulated Human Pose Estimation and Search in (Almost) Unconstrained Still Images. In IJCV, 2012 [Sap10] Sapp et al. Cascaded models for articulated pose estimation. In ECCV, 2010 [Yan11] Yang and Ramanan. Articulated pose estimation with flexible mixturesof-parts. In CVPR, 2011. References Human Pose Estimation (HPE) Algorithm Input",
"title": ""
},
{
"docid": "6a74c2d26f5125237929031cf1ccf204",
"text": "Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas. Our results have implications for improving creativity and building systems for distributed crowd innovation.",
"title": ""
},
{
"docid": "bd38c3f62798ed1f0b1e2baa6462123c",
"text": "The key issue in image fusion is the process of defining evaluation indices for the output image and for multi-scale image data set. This paper attempted to develop a fusion model for plantar pressure distribution images, which is expected to contribute to feature points construction based on shoe-last surface generation and modification. First, the time series plantar pressure distribution image was preprocessed, including back removing and Laplacian of Gaussian (LoG) filter. Then, discrete wavelet transform and a multi-scale pixel conversion fusion operating using a parameter estimation optimized Gaussian mixture model (PEO-GMM) were performed. The output image was used in a fuzzy weighted evaluation system, that included the following evaluation indices: mean, standard deviation, entropy, average gradient, and spatial frequency; the difference with the reference image, including the root mean square error, signal to noise ratio (SNR), and the peak SNR; and the difference with source image including the cross entropy, joint entropy, mutual information, deviation index, correlation coefficient, and the degree of distortion. These parameters were used to evaluate the results of the comprehensive evaluation value for the synthesized image. The image reflected the fusion of plantar pressure distribution using the proposed method compared with other fusion methods, such as up-down, mean-mean, and max-min fusion. The experimental results showed that the proposed LoG filtering with PEO-GMM fusion operator outperformed other methods.",
"title": ""
},
{
"docid": "2d0b170508ce03d649cf62ceef79a05a",
"text": "Gyroscope is one of the primary sensors for air vehicle navigation and controls. This paper investigates the noise characteristics of microelectromechanical systems (MEMS) gyroscope null drift and temperature compensation. This study mainly focuses on temperature as a long-term error source. An in-house-designed inertial measurement unit (IMU) is used to perform temperature effect testing in the study. The IMU is placed into a temperature control chamber. The chamber temperature is controlled to increase from 25 C to 80 C at approximately 0.8 degrees per minute. After that, the temperature is decreased to -40 C and then returns to 25 C. The null voltage measurements clearly demonstrate the rapidly changing short-term random drift and slowly changing long-term drift due to temperature variations. The characteristics of the short-term random drifts are analyzed and represented in probability density functions. A temperature calibration mechanism is established by using an artificial neural network to compensate the long-term drift. With the temperature calibration, the attitude computation problem due to gyro drifts can be improved significantly.",
"title": ""
},
{
"docid": "3c53d2589875a60b6c85cb8873a7c9a8",
"text": "presenting with bullous pemphigoid-like lesions. Dermatol Online J 2006; 12: 19. 3 Bhawan J, Milstone E, Malhotra R, et al. Scabies presenting as bullous pemphigoid-like eruption. J Am Acad Dermatol 1991; 24: 179–181. 4 Ostlere LS, Harris D, Rustin MH. Scabies associated with a bullous pemphigoid-like eruption. Br J Dermatol 1993; 128: 217–219. 5 Parodi A, Saino M, Rebora A. Bullous pemphigoid-like scabies. Clin Exp Dermatol 1993; 18: 293. 6 Slawsky LD, Maroon M, Tyler WB, et al. Association of scabies with a bullous pemphigoid-like eruption. J Am Acad Dermatol 1996; 34: 878–879. 7 Chen MC, Luo DQ. Bullous scabies failing to respond to glucocorticoids, immunoglobulin, and cyclophosphamide. Int J Dermatol 2014; 53: 265–266. 8 Nakamura E, Taniguchi H, Ohtaki N. A case of crusted scabies with a bullous pemphigoid-like eruption and nail involvement. J Dermatol 2006; 33: 196–201. 9 Galvany Rossell L, Salleras Redonnet M, Umbert Millet P. Bullous scabies responding to ivermectin therapy. Actas Dermosifiliogr 2010; 101: 81–84. 10 Gutte RM. Bullous scabies in an adult: a case report with review of literature. Indian Dermatol Online J 2013; 4: 311–313.",
"title": ""
},
{
"docid": "43100f1c6563b4af125c1c6040daa437",
"text": "Humans can naturally understand an image in depth with the aid of rich knowledge accumulated from daily lives or professions. For example, to achieve fine-grained image recognition (e.g., categorizing hundreds of subordinate categories of birds) usually requires a comprehensive visual concept organization including category labels and part-level attributes. In this work, we investigate how to unify rich professional knowledge with deep neural network architectures and propose a Knowledge-Embedded Representation Learning (KERL) framework for handling the problem of fine-grained image recognition. Specifically, we organize the rich visual concepts in the form of knowledge graph and employ a Gated Graph Neural Network to propagate node message through the graph for generating the knowledge representation. By introducing a novel gated mechanism, our KERL framework incorporates this knowledge representation into the discriminative image feature learning, i.e., implicitly associating the specific attributes with the feature maps. Compared with existing methods of fine-grained image classification, our KERL framework has several appealing properties: i) The embedded high-level knowledge enhances the feature representation, thus facilitating distinguishing the subtle differences among subordinate categories. ii) Our framework can learn feature maps with a meaningful configuration that the highlighted regions finely accord with the nodes (specific attributes) of the knowledge graph. Extensive experiments on the widely used CaltechUCSD bird dataset demonstrate the superiority of ∗Corresponding author is Liang Lin (Email: [email protected]). This work was supported by the National Natural Science Foundation of China under Grant 61622214, the Science and Technology Planning Project of Guangdong Province under Grant 2017B010116001, and Guangdong Natural Science Foundation Project for Research Teams under Grant 2017A030312006. head-pattern: masked Bohemian",
"title": ""
},
{
"docid": "8e878e5083d922d97f8d573c54cbb707",
"text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LMResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networkSchool of Mathematical Sciences, Peking University, Beijing, China MGH/BWH Center for Clinical Data Science, Masschusetts General Hospital, Harvard Medical School Center for Data Science in Health and Medicine, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research Beijing International Center for Mathematical Research, Peking University Center for Data Science, Peking University. Correspondence to: Bin Dong <[email protected]>, Quanzheng Li <[email protected]>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). s while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.",
"title": ""
},
{
"docid": "4fea6fb309d496f9b4fd281c80a8eed7",
"text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.",
"title": ""
},
{
"docid": "78b61359d8668336b198af9ad59fe149",
"text": "This paper discusses a fuzzy cost-based failure modes, effects, and criticality analysis (FMECA) approach for wind turbines. Conventional FMECA methods use a crisp risk priority number (RPN) as a measure of criticality which suffers from the difficulty of quantifying the risk. One method of increasing wind turbine reliability is to install a condition monitoring system (CMS). The RPN can be reduced with the help of a CMS because faults can be detected at an incipient level, and preventive maintenance can be scheduled. However, the cost of installing a CMS cannot be ignored. The fuzzy cost-based FMECA method proposed in this paper takes into consideration the cost of a CMS and the benefits it brings and provides a method for determining whether it is financially profitable to install a CMS. The analysis is carried out in MATLAB® which provides functions for fuzzy logic operation and defuzzification.",
"title": ""
},
{
"docid": "11bff8c8ed48fc53c841bafcaf2a04dd",
"text": "Co-Attentions are highly effective attention mechanisms for text matching applications. Co-Attention enables the learning of pairwise attentions, i.e., learning to attend based on computing word-level affinity scores between two documents. However, text matching problems can exist in either symmetrical or asymmetrical domains. For example, paraphrase identification is a symmetrical task while question-answer matching and entailment classification are considered asymmetrical domains. In this paper, we argue that Co-Attention models in asymmetrical domains require different treatment as opposed to symmetrical domains, i.e., a concept of word-level directionality should be incorporated while learning word-level similarity scores. Hence, the standard inner product in real space commonly adopted in co-attention is not suitable. This paper leverages attractive properties of the complex vector space and proposes a co-attention mechanism based on the complex-valued inner product (Hermitian products). Unlike the real dot product, the dot product in complex space is asymmetric because the first item is conjugated. Aside from modeling and encoding directionality, our proposed approach also enhances the representation learning process. Extensive experiments on five text matching benchmark datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "bb94ac9ac0c1e1f1155fc56b13bc103e",
"text": "In contrast to the Android application layer, Android’s application framework’s internals and their influence on the platform security and user privacy are still largely a black box for us. In this paper, we establish a static runtime model of the application framework in order to study its internals and provide the first high-level classification of the framework’s protected resources. We thereby uncover design patterns that differ highly from the runtime model at the application layer. We demonstrate the benefits of our insights for security-focused analysis of the framework by re-visiting the important use-case of mapping Android permissions to framework/SDK API methods. We, in particular, present a novel mapping based on our findings that significantly improves on prior results in this area that were established based on insufficient knowledge about the framework’s internals. Moreover, we introduce the concept of permission locality to show that although framework services follow the principle of separation of duty, the accompanying permission checks to guard sensitive operations violate it.",
"title": ""
},
{
"docid": "347c3929efc37dee3230189e576f14ab",
"text": "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption. This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without systemwide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ |, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.",
"title": ""
},
{
"docid": "1468a09c57b2d83181de06236386d323",
"text": "This article provides an overview of the pathogenesis of type 2 diabetes mellitus. Discussion begins by describing normal glucose homeostasis and ingestion of a typical meal and then discusses glucose homeostasis in diabetes. Topics covered include insulin secretion in type 2 diabetes mellitus and insulin resistance, the site of insulin resistance, the interaction between insulin sensitivity and secretion, the role of adipocytes in the pathogenesis of type 2 diabetes, cellular mechanisms of insulin resistance including glucose transport and phosphorylation, glycogen and synthesis,glucose and oxidation, glycolysis, and insulin signaling.",
"title": ""
},
{
"docid": "834bc1349d6da53c277ddd7eba95dc6a",
"text": "Lymphedema is a common condition frequently seen in cancer patients who have had lymph node dissection +/- radiation treatment. Traditional management is mainly non-surgical and unsatisfactory. Surgical treatment has relied on excisional techniques in the past. Physiologic operations have more recently been devised to help improve this condition. Assessing patients and deciding which of the available operations to offer them can be challenging. MRI is an extremely useful tool in patient assessment and treatment planning. J. Surg. Oncol. 2017;115:18-22. © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "e49d1f0aa79a2913131010c9f4d88bcf",
"text": "Low power consumption is crucial for medical implant devices. A single-chip, very-low-power interface IC used in implantable pacemaker systems is presented. It contains amplifiers, filters, ADCs, battery management system, voltage multipliers, high voltage pulse generators, programmable logic and timing control. A few circuit techniques are proposed to achieve nanopower circuit operations within submicron CMOS process. Subthreshold transistor designs and switched-capacitor circuits are widely used. The 200 k transistor IC occupies 49 mm/sup 2/, is fabricated in a 0.5-/spl mu/m two-poly three-metal multi-V/sub t/ process, and consumes 8 /spl mu/W.",
"title": ""
},
{
"docid": "73f6ba4ad9559cd3c6f7a88223e4b556",
"text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"title": ""
}
] | scidocsrr |
507b1e11ba8732940248cb59695056c6 | Dimensions of peri-implant mucosa: an evaluation of maxillary anterior single implants in humans. | [
{
"docid": "42faf2c0053c9f6a0147fc66c8e4c122",
"text": "IN 1921, Gottlieb's discovery of the epithelial attachment of the gingiva opened new horizons which served as the basis for a better understanding of the biology of the dental supporting tissues in health and disease. Three years later his pupils, Orban and Kohler (1924), undertook the task of measuring the epithelial attachment as well as the surrounding tissue relations during the four phases of passive eruption of the tooth. Gottlieb and Orban's descriptions of the epithelial attachment unveiled the exact morphology of this epithelial structure, and clarified the relation of this",
"title": ""
}
] | [
{
"docid": "9308c1dfdf313f6268db9481723f533d",
"text": "We report the discovery of a highly active Ni-Co alloy electrocatalyst for the oxidation of hydrazine (N(2)H(4)) and provide evidence for competing electrochemical (faradaic) and chemical (nonfaradaic) reaction pathways. The electrochemical conversion of hydrazine on catalytic surfaces in fuel cells is of great scientific and technological interest, because it offers multiple redox states, complex reaction pathways, and significantly more favorable energy and power densities compared to hydrogen fuel. Structure-reactivity relations of a Ni(60)Co(40) alloy electrocatalyst are presented with a 6-fold increase in catalytic N(2)H(4) oxidation activity over today's benchmark catalysts. We further study the mechanistic pathways of the catalytic N(2)H(4) conversion as function of the applied electrode potential using differentially pumped electrochemical mass spectrometry (DEMS). At positive overpotentials, N(2)H(4) is electrooxidized into nitrogen consuming hydroxide ions, which is the fuel cell-relevant faradaic reaction pathway. In parallel, N(2)H(4) decomposes chemically into molecular nitrogen and hydrogen over a broad range of electrode potentials. The electroless chemical decomposition rate was controlled by the electrode potential, suggesting a rare example of a liquid-phase electrochemical promotion effect of a chemical catalytic reaction (\"EPOC\"). The coexisting electrocatalytic (faradaic) and heterogeneous catalytic (electroless, nonfaradaic) reaction pathways have important implications for the efficiency of hydrazine fuel cells.",
"title": ""
},
{
"docid": "bdb41d1633c603f4b68dfe0191eb822b",
"text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.",
"title": ""
},
{
"docid": "72196b0a2eed5e9747d90593cdd0684d",
"text": "Advanced silicon (Si) node technology development is moving to 10/7nm technology and pursuing die size reduction, efficiency enhancement and lower power consumption for mobile applications in the semiconductor industry. The flip chip chip scale package (fcCSP) has been viewed as an attractive solution to achieve the miniaturization of die size, finer bump pitch, finer line width and spacing (LW/LS) substrate requirements, and is widely adopted in mobile devices to satisfy the increasing demands of higher performance, higher bandwidth, and lower power consumption as well as multiple functions. The utilization of mass reflow (MR) chip attach process in a fcCSP with copper (Cu) pillar bumps, embedded trace substrate (ETS) technology and molded underfill (MUF) is usually viewed as the cost-efficient solution. However, when finer bump pitch and LW/LS with an escaped trace are designed in flip chip MR process, a higher risk of a bump to trace short can occur. In order to reduce the risk of bump to trace short as well as extremely low-k (ELK) damage in a fcCSP with advanced Si node, the thermo-compression bonding (TCB) and TCB with non-conductive paste (TCNCP) have been adopted, although both methodologies will cause a higher assembly cost due to the lower units per hour (UPH) assembly process. For the purpose of delivering a cost-effective chip attach process as compared to TCB/TCNCP methodologies as well as reducing the risk of bump to trace as compared to the MR process, laser assisted bonding (LAB) chip attach methodology was studied in a 15x15mm fcCSP with 10nm backend process daisy-chain die for this paper. Using LAB chip attach technology can increase the UPH by more than 2-times over TCB and increase the UPH 5-times compared to TCNCP. To realize the ELK performance of a 10nm fcCSP with fine bump pitch of $60 \\mu \\mathrm{m}$ and $90 \\mu \\mathrm{m}$ as well as 2-layer ETS with two escaped traces design, the quick temperature cycling (QTC) test was performed after the LAB chip attach process. The comparison of polyimide (PI) layer Cu pillar bumps to non-PI Cu pillar bumps (without a PI layer) will be discussed to estimate the 10nm ELK performance. The evaluated result shows that the utilization of LAB can not only achieve a bump pitch reduction with a finer LW/LS substrate with escaped traces in the design, but it also validates ELK performance and Si node reduction. Therefore, the illustrated LAB chip attach processes examined here can guarantee the assembly yield with less ELK damage risk in a 10nm fcCSP with finer bump pitch and substrate finer LW/LS design in the future.",
"title": ""
},
{
"docid": "bc8950644ded24618a65c4fcef302044",
"text": "Child maltreatment is a pervasive problem in our society that has long-term detrimental consequences to the development of the affected child such as future brain growth and functioning. In this paper, we surveyed empirical evidence on the neuropsychological effects of child maltreatment, with a special emphasis on emotional, behavioral, and cognitive process–response difficulties experienced by maltreated children. The alteration of the biochemical stress response system in the brain that changes an individual’s ability to respond efficiently and efficaciously to future stressors is conceptualized as the traumatic stress response. Vulnerable brain regions include the hypothalamic–pituitary–adrenal axis, the amygdala, the hippocampus, and prefrontal cortex and are linked to children’s compromised ability to process both emotionally-laden and neutral stimuli in the future. It is suggested that information must be garnered from varied literatures to conceptualize a research framework for the traumatic stress response in maltreated children. This research framework suggests an altered developmental trajectory of information processing and emotional dysregulation, though much debate still exists surrounding the correlational nature of empirical studies, the potential of resiliency following childhood trauma, and the extent to which early interventions may facilitate recovery.",
"title": ""
},
{
"docid": "8d070d8506d8a83ce78bde0e19f28031",
"text": "Although amyotrophic lateral sclerosis and its variants are readily recognised by neurologists, about 10% of patients are misdiagnosed, and delays in diagnosis are common. Prompt diagnosis, sensitive communication of the diagnosis, the involvement of the patient and their family, and a positive care plan are prerequisites for good clinical management. A multidisciplinary, palliative approach can prolong survival and maintain quality of life. Treatment with riluzole improves survival but has a marginal effect on the rate of functional deterioration, whereas non-invasive ventilation prolongs survival and improves or maintains quality of life. In this Review, we discuss the diagnosis, management, and how to cope with impaired function and end of life on the basis of our experience, the opinions of experts, existing guidelines, and clinical trials. We highlight the need for research on the effectiveness of gastrostomy, access to non-invasive ventilation and palliative care, communication between the care team, the patient and his or her family, and recognition of the clinical and social effects of cognitive impairment. We recommend that the plethora of evidence-based guidelines should be compiled into an internationally agreed guideline of best practice.",
"title": ""
},
{
"docid": "c3365370cdbf4afe955667f575d1fbb6",
"text": "One of the overriding interests of the literature on health care economics is to discover where personal choice in market economies end and corrective government intervention should begin. Our study addresses this question in the context of John Stuart Mill's utilitarian principle of harm. Our primary objective is to determine whether public policy interventions concerning more than 35,000 online pharmacies worldwide are necessary and efficient compared to traditional market-oriented approaches. Secondly, we seek to determine whether government interference could enhance personal utility maximization, despite its direct and indirect (unintended) costs on medical e-commerce. This study finds that containing the negative externalities of medical e-commerce provides the most compelling raison d'etre of government interference. It asserts that autonomy and paternalism need not be mutually exclusive, despite their direct and indirect consequences on individual choice and decision-making processes. Valuable insights derived from Mill's principle should enrich theory-building in health care economics and policy.",
"title": ""
},
{
"docid": "72eceddfa08e73739022df7c0dc89a3a",
"text": "The empirical mode decomposition (EMD) proposed by Huang et al. in 1998 shows remarkably effective in analyzing nonlinear signals. It adaptively represents nonstationary signals as sums of zero-mean amplitude modulation-frequency modulation (AM-FM) components by iteratively conducting the sifting process. How to determine the boundary conditions of the cubic spline when constructing the envelopes of data is the critical issue of the sifting process. A simple bound hit process technique is presented in this paper which constructs two periodic series from the original data by even and odd extension and then builds the envelopes using cubic spline with periodic boundary condition. The EMD is conducted fluently without any assumptions of the processed data by this approach. An example is presented to pick out the weak modulation of internal waves from an Envisat ASAR image by EMD with the boundary process technique",
"title": ""
},
{
"docid": "e2ea8ec9139837feb95ac432a63afe88",
"text": "Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.",
"title": ""
},
{
"docid": "03368de546daf96d5111325f3d08fd3d",
"text": "Despite the widespread use of social media by students and its increased use by instructors, very little empirical evidence is available concerning the impact of social media use on student learning and engagement. This paper describes our semester-long experimental study to determine if using Twitter – the microblogging and social networking platform most amenable to ongoing, public dialogue – for educationally relevant purposes can impact college student engagement and grades. A total of 125 students taking a first year seminar course for pre-health professional majors participated in this study (70 in the experimental group and 55 in the control group). With the experimental group, Twitter was used for various types of academic and co-curricular discussions. Engagement was quantified by using a 19-item scale based on the National Survey of Student Engagement. To assess differences in engagement and grades, we used mixed effects analysis of variance (ANOVA) models, with class sections nested within treatment groups. We also conducted content analyses of samples of Twitter exchanges. The ANOVA results showed that the experimental group had a significantly greater increase in engagement than the control group, as well as higher semester grade point averages. Analyses of Twitter communications showed that students and faculty were both highly engaged in the learning process in ways that transcended traditional classroom activities. This study provides experimental evidence that Twitter can be used as an educational tool to help engage students and to mobilize faculty into a more active and participatory role.",
"title": ""
},
{
"docid": "ed3b4ace00c68e9ad2abe6d4dbdadfcb",
"text": "With decreasing costs of high-quality surveillance systems, human activity detection and tracking has become increasingly practical. Accordingly, automated systems have been designed for numerous detection tasks, but the task of detecting illegally parked vehicles has been left largely to the human operators of surveillance systems. We propose a methodology for detecting this event in real time by applying a novel image projection that reduces the dimensionality of the data and, thus, reduces the computational complexity of the segmentation and tracking processes. After event detection, we invert the transformation to recover the original appearance of the vehicle and to allow for further processing that may require 2-D data. We evaluate the performance of our algorithm using the i-LIDS vehicle detection challenge datasets as well as videos we have taken ourselves. These videos test the algorithm in a variety of outdoor conditions, including nighttime video and instances of sudden changes in weather.",
"title": ""
},
{
"docid": "14cc3608216dd17e7bcbc3e6acba66db",
"text": "Fluorescamine is a new reagent for the detection of primary amines in the picomole range. Its reaction with amines is almost instantaneous at room temperature in aqueous media. The products are highly fluorescent, whereas the reagent and its degradation products are nonfluorescent. Applications are discussed.",
"title": ""
},
{
"docid": "5e6994d8e9cc3af1371a24ac73058a82",
"text": "The first method that was developed to deal with the SLAM problem is based on the extended Kalman filter, EKF SLAM. However this approach cannot be applied to a large environments because of the quadratic complexity and data association problem. The second approach to address the SLAM problem is based on the Rao-Blackwellized Particle filter FastSLAM, which follows a large number of hypotheses that represent the different possible trajectories, each trajectory carries its own map, its complexity increase logarithmically with the number of landmarks in the map. In this paper we will present the result of an implementation of the FastSLAM 2.0 on an open multimedia applications processor, based on a monocular camera as an exteroceptive sensor. A parallel implementation of this algorithm was achieved. Results aim to demonstrate that an optimized algorithm implemented on a low cost architecture is suitable to design an embedded system for SLAM applications.",
"title": ""
},
{
"docid": "ad0a69f92d511e02a24b8d77d3a17641",
"text": "Requirement engineering is an integral part of the software development lifecycle since the basis for developing successful software depends on comprehending its requirements in the first place. Requirement engineering involves a number of processes for gathering requirements in accordance with the needs and demands of users and stakeholders of the software product. In this paper, we have reviewed the prominent processes, tools and technologies used in the requirement gathering phase. The study is useful to perceive the current state of the affairs pertaining to the requirement engineering research and to understand the strengths and limitations of the existing requirement engineering techniques. The study also summarizes the best practices and how to use a blend of the requirement engineering techniques as an effective methodology to successfully conduct the requirement engineering task. The study also highlights the importance of security requirements as though they are part of the nonfunctional requirement, yet are naturally considered fundamental to secure software development.",
"title": ""
},
{
"docid": "04fc127c1b6e915060c2f3035aa5067b",
"text": "Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing–emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user’s emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and pshysiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.",
"title": ""
},
{
"docid": "d603e92c3f3c8ab6a235631ee3a55d52",
"text": "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. We rst show how to extend the standard notion of classiication by allowing each instance to be associated with multiple labels. We then discuss our approach for multiclass multi-label text categorization which is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identiication from unconstrained spoken customer responses.",
"title": ""
},
{
"docid": "cc5d670e751090b29ee9365e840a70c2",
"text": "The Web today provides a corpus of design examples unparalleled in human history. However, leveraging existing designs to produce new pages is currently difficult. This paper introduces the Bricolage algorithm for automatically transferring design and content between Web pages. Bricolage introduces a novel structuredprediction technique that learns to create coherent mappings between pages by training on human-generated exemplars. The produced mappings can then be used to automatically transfer the content from one page into the style and layout of another. We show that Bricolage can learn to accurately reproduce human page mappings, and that it provides a general, efficient, and automatic technique for retargeting content between a variety of real Web pages.",
"title": ""
},
{
"docid": "6c72d16c788509264f573a322c9ebaf6",
"text": "A 5-year clinical and laboratory study of Nigerian children with renal failure (RF) was performed to determine the factors that limited their access to dialysis treatment and what could be done to improve access. There were 48 boys and 33 girls (aged 20 days to 15 years). Of 81 RF patients, 55 were eligible for dialysis; 33 indicated ability to afford dialysis, but only 6 were dialyzed, thus giving a dialysis access rate of 10.90% (6/55). Ability to bear dialysis cost/dialysis accessibility ratio was 5.5:1 (33/6). Factors that limited access to dialysis treatment in our patients included financial restrictions from parents (33%), no parental consent for dialysis (6%), lack or failure of dialysis equipment (45%), shortage of dialysis personnel (6%), reluctance of renal staff to dialyze (6%), and late presentation in hospital (4%). More deaths were recorded among undialyzed than dialyzed patients (P<0.01); similarly, undialyzed patients had more deaths compared with RF patients who required no dialysis (P<0.025). Since most of our patients could not be dialyzed owing to a range of factors, preventive nephrology is advocated to reduce the morbidity and mortality from RF due to preventable diseases.",
"title": ""
},
{
"docid": "22cc9e5487975f8b7ca400ad69504107",
"text": "IMSI Catchers are tracking devices that break the privacy of the subscribers of mobile access networks, with disruptive effects to both the communication services and the trust and credibility of mobile network operators. Recently, we verified that IMSI Catcher attacks are really practical for the state-of-the-art 4G/LTE mobile systems too. Our IMSI Catcher device acquires subscription identities (IMSIs) within an area or location within a few seconds of operation and then denies access of subscribers to the commercial network. Moreover, we demonstrate that these attack devices can be easily built and operated using readily available tools and equipment, and without any programming. We describe our experiments and procedures that are based on commercially available hardware and unmodified open source software.",
"title": ""
},
{
"docid": "dbd504abdff9b5bd80a88f19c3cd7715",
"text": "L'hamartome lipomateux superficiel de Hoffmann-Zurhelle est une tumeur bénigne souvent congénitale. Histologiquement, il est caractérisé par la présence hétérotopique de cellules adipeuses quelquefois lipoblastiques autour des trajets vasculaires dermiques. Nous rapportons une nouvelle observation de forme multiple à révélation tardive chez une femme âgée de 31 ans sans antécédents pathologiques notables qui a été adressée à la consultation pour des papules et tumeurs asymptomatiques de couleur chaire se regroupent en placards à disposition linéaire et zostèriforme au niveau de la face externe de la cuisse droite depuis l'âge de 13 ans, augmentant progressivement de taille. L'étude histologique d'un fragment biopsique avait montré un épiderme régulier, plicaturé et kératinisant, soulevé par un tissu fibro-adipeux abondant incluant quelques vaisseaux sanguins aux dépens du derme moyen. Ces données cliniques et histologiques ont permis de retenir le diagnostic d'hamartome lipomateux superficiel. Une exérèse chirurgicale des tumeurs de grande taille a été proposée complété par le laser CO2 pour le reste de lésions cutanées. L'hamartome lipomateux superficiel est une lésion bénigne sans potentiel de malignité. L'exérèse chirurgicale peut être proposée si la lésion est gênante ou dans un but essentiellement esthétique. Pan African Medical Journal. 2015; 21:31 doi:10.11604/pamj.2015.21.31.4773 This article is available online at: http://www.panafrican-med-journal.com/content/article/21/31/full/ © Sanaa Krich et al. The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). (www.afenet.net) Case report Open Access",
"title": ""
},
{
"docid": "058a89e44689faa0a2545b5b75fd8cb9",
"text": "cplint on SWISH is a web application that allows users to perform reasoning tasks on probabilistic logic programs. Both inference and learning systems can be performed: conditional probabilities with exact, rejection sampling and Metropolis-Hasting methods. Moreover, the system now allows hybrid programs, i.e., programs where some of the random variables are continuous. To perform inference on such programs likelihood weighting and particle filtering are used. cplint on SWISH is also able to sample goals’ arguments and to graph the results. This paper reports on advances and new features of cplint on SWISH, including the capability of drawing the binary decision diagrams created during the inference processes.",
"title": ""
}
] | scidocsrr |
c25516cd1ad53cdea15feb51571a2de6 | Suspecting Less and Doing Better: New Insights on Palmprint Identification for Faster and More Accurate Matching | [
{
"docid": "8fd5b3cead78b47e95119ac1a70e44db",
"text": "Two-dimensional (2-D) hand-geometry features carry limited discriminatory information and therefore yield moderate performance when utilized for personal identification. This paper investigates a new approach to achieve performance improvement by simultaneously acquiring and combining three-dimensional (3-D) and 2-D features from the human hand. The proposed approach utilizes a 3-D digitizer to simultaneously acquire intensity and range images of the presented hands of the users in a completely contact-free manner. Two new representations that effectively characterize the local finger surface features are extracted from the acquired range images and are matched using the proposed matching metrics. In addition, the characterization of 3-D palm surface using SurfaceCode is proposed for matching a pair of 3-D palms. The proposed approach is evaluated on a database of 177 users acquired in two sessions. The experimental results suggest that the proposed 3-D hand-geometry features have significant discriminatory information to reliably authenticate individuals. Our experimental results demonstrate that consolidating 3-D and 2-D hand-geometry features results in significantly improved performance that cannot be achieved with the traditional 2-D hand-geometry features alone. Furthermore, this paper also investigates the performance improvement that can be achieved by integrating five biometric features, i.e., 2-D palmprint, 3-D palmprint, finger texture, along with 3-D and 2-D hand-geometry features, that are simultaneously extracted from the user's hand presented for authentication.",
"title": ""
}
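The preceding abstract consolidates match scores from several hand modalities (2-D palmprint, 3-D palmprint, finger texture, hand geometry). As an illustration only, the sketch below shows a generic score-level fusion baseline (min-max normalisation plus a weighted sum); it is not the authors' fusion rule, and the weights and array shapes are hypothetical.

```python
import numpy as np

def fuse_scores(score_matrix, weights):
    """score_matrix: (n_comparisons, n_modalities) raw match scores,
    one column per modality (e.g. 2-D palmprint, 3-D palmprint, ...)."""
    s = np.asarray(score_matrix, dtype=float)
    # min-max normalise each modality so the scores are comparable
    s = (s - s.min(axis=0)) / (s.max(axis=0) - s.min(axis=0) + 1e-12)
    w = np.asarray(weights, dtype=float)
    return s @ (w / w.sum())  # weighted-sum fused score per comparison

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    genuine = rng.normal(0.8, 0.1, size=(50, 3))   # toy scores from 3 modalities
    impostor = rng.normal(0.4, 0.1, size=(50, 3))
    scores = np.vstack([genuine, impostor])
    fused = fuse_scores(scores, weights=[0.4, 0.4, 0.2])  # hypothetical weights
    print(fused[:3], fused[-3:])  # genuine comparisons should score higher
```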
] | [
{
"docid": "978dd8a7f33df74d4a5cea149be6ebb0",
"text": "A tutorial on the design and development of automatic speakerrecognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or toverify a person’s claimed identity. Speech processing and the basic components of automatic speakerrecognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared.",
"title": ""
},
{
"docid": "77564f157ea8ab43d6d9f95a212e7948",
"text": "We consider the problem of mining association rules on a shared-nothing multiprocessor. We present three algorithms that explore a spectrum of trade-oos between computation, communication, memory usage, synchronization, and the use of problem-speciic information. The best algorithm exhibits near perfect scaleup behavior, yet requires only minimal overhead compared to the current best serial algorithm.",
"title": ""
},
{
"docid": "54a47a57296658ca0e8bae74fd99e8f0",
"text": "Road traffic accidents are among the top leading causes of deaths and injuries of various levels. Ethiopia is experiencing highest rate of such accidents resulting in fatalities and various levels of injuries. Addis Ababa, the capital city of Ethiopia, takes the lion’s share of the risk having higher number of vehicles and traffic and the cost of these fatalities and injuries has a great impact on the socio-economic development of a society. This research is focused on developing adaptive regression trees to build a decision support system to handle road traffic accident analysis for Addis Ababa city traffic office. The study focused on injury severity levels resulting from an accident using real data obtained from the Addis Ababa traffic office. Empirical results show that the developed models could classify accidents within reasonable accuracy.",
"title": ""
},
{
"docid": "4667b31c7ee70f7bc3709fc40ec6140f",
"text": "This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.",
"title": ""
},
{
"docid": "da7f869037f40ab8666009d85d9540ff",
"text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.",
"title": ""
},
{
"docid": "da5ad61c492419515e8449b435b42e80",
"text": "Camera tracking is an important issue in many computer vision and robotics applications, such as, augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The proposed approach is based on tracking a set of sparse features, which are successively tracked in a stream of video frames. In the developed system, camera initially views a chessboard with known cell size for few frames to be enabled to construct initial map of the environment. Thereafter, Camera pose estimation for each new incoming frame is carried out in a framework that is merely working with a set of visible natural landmarks. Estimation of 6-DOF camera pose parameters is performed using a particle filter. Moreover, recovering depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied on real world videos and positioning error of the camera pose is less than 3 cm in average that indicates effectiveness and accuracy of the proposed method.",
"title": ""
},
{
"docid": "1edd6cb3c6ed4657021b6916efbc23d9",
"text": "Siamese-like networks, Streetscore-CNN (SS-CNN) and Ranking SS-CNN, to predict pairwise comparisons Figure 1: User Interface for Crowdsourced Online Game Performance Analysis • SS-CNN: We calculate the % of pairwise comparisons in test set predicted correctly by (1) Softmax of output neurons in final layer (2) comparing TrueSkill scores [2] obtained from synthetic pairwise comparisons from the CNN (3) extracting features from penultimate layer of CNN and feeding pairwise feature representations to a RankSVM [3] • RSS-CNN: We compare the ranking function outputs for both images in a test pair to decide which image wins, and calculate the binary prediction accuracy.",
"title": ""
},
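The RSS-CNN evaluation described in the preceding abstract reduces to a simple computation once per-image ranking scores are available. The sketch below is a minimal illustration of that binary pairwise-comparison accuracy under an assumed data layout; it is not the authors' code, and the array names are hypothetical.

```python
import numpy as np

def pairwise_accuracy(scores_left, scores_right, human_winner):
    """scores_*: ranking-function outputs for the two images of each test pair.
    human_winner: 0 if the left image won the crowdsourced comparison, 1 otherwise."""
    predicted_winner = (scores_right > scores_left).astype(int)  # 1 -> right image wins
    return float(np.mean(predicted_winner == human_winner))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left, right = rng.normal(size=100), rng.normal(size=100)
    labels = (right > left).astype(int)            # toy ground truth
    print(pairwise_accuracy(left, right, labels))  # 1.0 on this toy example
```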
{
"docid": "32977df591e90db67bf09b0412f56d7b",
"text": "In an electronic warfare (EW) battlefield environment, it is highly necessary for a fighter aircraft to intercept and identify the several interleaved radar signals that it receives from the surrounding emitters, so as to prepare itself for countermeasures. The main function of the Electronic Support Measure (ESM) receiver is to receive, measure, deinterleave pulses and then identify alternative threat emitters. Deinterleaving of radar signals is based on time of arrival (TOA) analysis and the use of the sequential difference (SDIF) histogram method for determining the pulse repetition interval (PRI), which is an important pulse parameter. Once the pulse repetition intervals are determined, check for the existence of staggered PRI (level-2) is carried out, implemented in MATLAB. Keywordspulse deinterleaving, pulse repetition interval, stagger PRI, sequential difference histogram, time of arrival.",
"title": ""
},
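The SDIF histogram mentioned above estimates candidate PRIs by histogramming differences between nearby TOAs and looking for dominant peaks. The sketch below (Python rather than the MATLAB implementation referenced in the abstract, with assumed time units and a fixed bin width) illustrates the basic idea on two interleaved pulse trains; it omits threshold selection and the staggered-PRI check.

```python
import numpy as np
from collections import Counter

def sdif_histogram(toas, max_order=3, bin_width=1.0):
    """Histogram of TOA[i+k] - TOA[i] for k = 1..max_order, bucketed by bin_width."""
    toas = np.sort(np.asarray(toas, dtype=float))
    hist = Counter()
    for k in range(1, max_order + 1):
        for d in toas[k:] - toas[:-k]:
            hist[round(d / bin_width) * bin_width] += 1
    return hist

if __name__ == "__main__":
    # Two interleaved pulse trains with PRIs of 100 and 137 time units.
    t1 = np.arange(0, 5000, 100.0)
    t2 = np.arange(13, 5000, 137.0)
    hist = sdif_histogram(np.concatenate([t1, t2]))
    # Peaks should appear at the true PRIs (100 and 137) among the top bins.
    print(sorted(hist.items(), key=lambda kv: -kv[1])[:5])
```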
{
"docid": "8343f34186fc387bfe28db3f7b8bd5fc",
"text": "Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping them with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.",
"title": ""
},
{
"docid": "ab6b26f6f1abf07aa91a0a933a7b6c43",
"text": "This paper describes a machine learningbased approach that uses word embedding features to recognize drug names from biomedical texts. As a starting point, we developed a baseline system based on Conditional Random Field (CRF) trained with standard features used in current Named Entity Recognition (NER) systems. Then, the system was extended to incorporate new features, such as word vectors and word clusters generated by the Word2Vec tool and a lexicon feature from the DINTO ontology. We trained the Word2vec tool over two different corpus: Wikipedia and MedLine. Our main goal is to study the effectiveness of using word embeddings as features to improve performance on our baseline system, as well as to analyze whether the DINTO ontology could be a valuable complementary data source integrated in a machine learning NER system. To evaluate our approach and compare it with previous work, we conducted a series of experiments on the dataset of SemEval-2013 Task 9.1 Drug Name Recognition.",
"title": ""
},
{
"docid": "a8d616897b7cbb1182d5f6e8cf4318a9",
"text": "User behaviour targeting is essential in online advertising. Compared with sponsored search keyword targeting and contextual advertising page content targeting, user behaviour targeting builds users’ interest profiles via tracking their online behaviour and then delivers the relevant ads according to each user’s interest, which leads to higher targeting accuracy and thus more improved advertising performance. The current user profiling methods include building keywords and topic tags or mapping users onto a hierarchical taxonomy. However, to our knowledge, there is no previous work that explicitly investigates the user online visits similarity and incorporates such similarity into their ad response prediction. In this work, we propose a general framework which learns the user profiles based on their online browsing behaviour, and transfers the learned knowledge onto prediction of their ad response. Technically, we propose a transfer learning model based on the probabilistic latent factor graphic models, where the users’ ad response profiles are generated from their online browsing profiles. The large-scale experiments based on real-world data demonstrate significant improvement of our solution over some strong baselines.",
"title": ""
},
{
"docid": "7cbe504e03ab802389c48109ed1f1802",
"text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.",
"title": ""
},
{
"docid": "b6a8f45bd10c30040ed476b9d11aa908",
"text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",
"title": ""
},
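One of the modifications listed in the preceding abstract replaces the 256-way softmax with a discretized logistic mixture likelihood over pixel values. The sketch below shows only a single logistic component discretized over 8-bit bins, in plain numpy; the mixture weights, sub-pixel conditioning and network training used in the actual model are omitted, and the parameter values are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def discretized_logistic_pmf(x, mu, s):
    """P(pixel == x) for integer x in [0, 255] under a logistic(mu, s) rounded to bins.
    Edge bins at 0 and 255 absorb the remaining tail mass."""
    x = np.asarray(x, dtype=float)
    cdf_plus = np.where(x >= 255, 1.0, sigmoid((x + 0.5 - mu) / s))
    cdf_minus = np.where(x <= 0, 0.0, sigmoid((x - 0.5 - mu) / s))
    return cdf_plus - cdf_minus

if __name__ == "__main__":
    probs = discretized_logistic_pmf(np.arange(256), mu=100.0, s=8.0)
    print(probs.sum())  # ~1.0: the 256 bins cover the whole distribution
```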
{
"docid": "3ec9c5459d08204025edb57e05583f29",
"text": "Cell-based sensing represents a new paradigm for performing direct and accurate detection of cell- or tissue-specific responses by incorporating living cells or tissues as an integral part of a sensor. Here we report a new magnetic cell-based sensing platform by combining magnetic sensors implemented in the complementary metal-oxide-semiconductor (CMOS) integrated microelectronics process with cardiac progenitor cells that are differentiated directly on-chip. We show that the pulsatile movements of on-chip cardiac progenitor cells can be monitored in a real-time manner. Our work provides a new low-cost approach to enable high-throughput screening systems as used in drug development and hand-held devices for point-of-care (PoC) biomedical diagnostic applications.",
"title": ""
},
{
"docid": "a8edc02eb78637f18fc948d81397fc75",
"text": "When we are investigating an object in a data set, which itself may or may not be an outlier, can we identify unusual (i.e., outlying) aspects of the object? In this paper, we identify the novel problem of mining outlying aspects on numeric data. Given a query object $$o$$ o in a multidimensional numeric data set $$O$$ O , in which subspace is $$o$$ o most outlying? Technically, we use the rank of the probability density of an object in a subspace to measure the outlyingness of the object in the subspace. A minimal subspace where the query object is ranked the best is an outlying aspect. Computing the outlying aspects of a query object is far from trivial. A naïve method has to calculate the probability densities of all objects and rank them in every subspace, which is very costly when the dimensionality is high. We systematically develop a heuristic method that is capable of searching data sets with tens of dimensions efficiently. Our empirical study using both real data and synthetic data demonstrates that our method is effective and efficient.",
"title": ""
},
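The outlyingness measure described above is the rank of an object's probability density among all objects within a subspace. A naive version of that rank (kernel density estimate, no pruning) can be written directly; the sketch below is illustrative only and does not include the heuristic subspace search from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_rank(data, query_index, subspace):
    """Rank of the query object's density in the given subspace (rank 1 = most outlying)."""
    sub = data[:, subspace].T          # gaussian_kde expects shape (dims, n_samples)
    densities = gaussian_kde(sub)(sub)
    order = np.argsort(densities)      # ascending: lowest density first
    return int(np.where(order == query_index)[0][0]) + 1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    X[0, :2] += 6.0                    # make object 0 unusual in subspace {0, 1}
    print(density_rank(X, 0, [0, 1]))  # expected: rank 1 (most outlying)
    print(density_rank(X, 0, [2, 3]))  # expected: an unremarkable rank
```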
{
"docid": "ddb804eec29ebb8d7f0c80223184305a",
"text": "Near Field Communication (NFC) enables physically proximate devices to communicate over very short ranges in a peer-to-peer manner without incurring complex network configuration overheads. However, adoption of NFC-enabled applications has been stymied by the low levels of penetration of NFC hardware. In this paper, we address the challenge of enabling NFC-like capability on the existing base of mobile phones. To this end, we develop Dhwani, a novel, acoustics-based NFC system that uses the microphone and speakers on mobile phones, thus eliminating the need for any specialized NFC hardware. A key feature of Dhwani is the JamSecure technique, which uses self-jamming coupled with self-interference cancellation at the receiver, to provide an information-theoretically secure communication channel between the devices. Our current implementation of Dhwani achieves data rates of up to 2.4 Kbps, which is sufficient for most existing NFC applications.",
"title": ""
},
{
"docid": "d31c6830ee11fc73b53c7930ad0e638f",
"text": "This paper proposes two rectangular ring planar monopole antennas for wideband and ultra-wideband applications. Simple planar rectangular rings are used to design the planar antennas. These rectangular rings are designed in a way to achieve the wideband operations. The operating frequency band ranges from 1.85 GHz to 4.95 GHz and 3.12 GHz to 14.15 GHz. The gain varies from 1.83 dBi to 2.89 dBi for rectangular ring wideband antenna and 1.89 dBi to 5.2 dBi for rectangular ring ultra-wideband antenna. The design approach and the results are discussed.",
"title": ""
},
{
"docid": "ac1e1d7daed4a960ff3a17a03155ddfa",
"text": "This paper explores the role of the business model in capturing value from early stage technology. A successful business model creates a heuristic logic that connects technical potential with the realization of economic value. The business model unlocks latent value from a technology, but its logic constrains the subsequent search for new, alternative models for other technologies later on—an implicit cognitive dimension overlooked in most discourse on the topic. We explore the intellectual roots of the concept, offer a working definition and show how the Xerox Corporation arose by employing an effective business model to commercialize a technology rejected by other leading companies of the day. We then show the long shadow that this model cast upon Xerox’s later management of selected spin-off companies from Xerox PARC. Xerox evaluated the technical potential of these spin-offs through its own business model, while those spin-offs that became successful did so through evolving business models that came to differ substantially from that of Xerox. The search and learning for an effective business model in failed ventures, by contrast, were quite limited.",
"title": ""
},
{
"docid": "12b075837d52d5c73a155466c28f2996",
"text": "Banks in Nigeria need to understand the perceptual difference in both male and female employees to better develop adequate policy on sexual harassment. This study investigated the perceptual differences on sexual harassment among male and female bank employees in two commercial cities (Kano and Lagos) of Nigeria.Two hundred and seventy five employees (149 males, 126 females) were conveniently sampled for this study. A survey design with a questionnaire adapted from Sexual Experience Questionnaire (SEQ) comprises of three dimension scalesof sexual harassment was used. The hypotheses were tested with independent samples t-test. The resultsindicated no perceptual differences in labelling sexual harassment clues between male and female bank employees in Nigeria. Thus, the study recommends that bank managers should support and establish the tone for sexual harassment-free workplace. KeywordsGender Harassment, Sexual Coercion, Unwanted Sexual Attention, Workplace.",
"title": ""
}
] | scidocsrr |
7b2ed986ed98f67cdc3456f543a73f54 | In-DBMS Sampling-based Sub-trajectory Clustering | [
{
"docid": "03aba9a44f1ee13cc7f16aadbebb7165",
"text": "The increasing pervasiveness of location-acquisition technologies has enabled collection of huge amount of trajectories for almost any kind of moving objects. Discovering useful patterns from their movement behaviors can convey valuable knowledge to a variety of critical applications. In this light, we propose a novel concept, called gathering, which is a trajectory pattern modeling various group incidents such as celebrations, parades, protests, traffic jams and so on. A key observation is that these incidents typically involve large congregations of individuals, which form durable and stable areas with high density. In this work, we first develop a set of novel techniques to tackle the challenge of efficient discovery of gathering patterns on archived trajectory dataset. Afterwards, since trajectory databases are inherently dynamic in many real-world scenarios such as traffic monitoring, fleet management and battlefield surveillance, we further propose an online discovery solution by applying a series of optimization schemes, which can keep track of gathering patterns while new trajectory data arrive. Finally, the effectiveness of the proposed concepts and the efficiency of the approaches are validated by extensive experiments based on a real taxicab trajectory dataset.",
"title": ""
}
] | [
{
"docid": "2089f931cf6fca595898959cbfbca28a",
"text": "Continuum robotic manipulators articulate due to their inherent compliance. Tendon actuation leads to compression of the manipulator, extension of the actuators, and is limited by the practical constraint that tendons cannot support compression. In light of these observations, we present a new linear model for transforming desired beam configuration to tendon displacements and vice versa. We begin from first principles in solid mechanics by analyzing the effects of geometrically nonlinear tendon loads. These loads act both distally at the termination point and proximally along the conduit contact interface. The resulting model simplifies to a linear system including only the bending and axial modes of the manipulator as well as the actuator compliance. The model is then manipulated to form a concise mapping from beam configuration-space parameters to n redundant tendon displacements via the internal loads and strains experienced by the system. We demonstrate the utility of this model by implementing an optimal feasible controller. The controller regulates axial strain to a constant value while guaranteeing positive tendon forces and minimizing their magnitudes over a range of articulations. The mechanics-based model from this study provides insight as well as performance gains for this increasingly ubiquitous class of manipulators.",
"title": ""
},
{
"docid": "c551e19208e367cc5546a3d46f7534c8",
"text": "We propose a novel approach for solving the approximate nearest neighbor search problem in arbitrary metric spaces. The distinctive feature of our approach is that we can incrementally build a non-hierarchical distributed structure for given metric space data with a logarithmic complexity scaling on the size of the structure and adjustable accuracy probabilistic nearest neighbor queries. The structure is based on a small world graph with vertices corresponding to the stored elements, edges for links between them and the greedy algorithm as base algorithm for searching. Both search and addition algorithms require only local information from the structure. The performed simulation for data in the Euclidian space shows that the structure built using the proposed algorithm has navigable small world properties with logarithmic search complexity at fixed accuracy and has weak (power law) scalability with the dimensionality of the stored data.",
"title": ""
},
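The base search procedure mentioned in the preceding abstract is a greedy walk on the small-world graph: starting from an entry point, repeatedly move to whichever neighbour is closer to the query and stop at a local minimum, which serves as the approximate nearest neighbour. A minimal sketch follows (adjacency-list graph, Euclidean distance; an illustration, not the authors' implementation).

```python
import numpy as np

def greedy_search(graph, vectors, query, entry_point):
    """graph: dict node -> list of neighbour ids; vectors: dict node -> np.ndarray."""
    current = entry_point
    current_dist = np.linalg.norm(vectors[current] - query)
    while True:
        best, best_dist = current, current_dist
        for nb in graph[current]:
            d = np.linalg.norm(vectors[nb] - query)
            if d < best_dist:
                best, best_dist = nb, d
        if best == current:            # local minimum reached
            return current, current_dist
        current, current_dist = best, best_dist

if __name__ == "__main__":
    vecs = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 0.0]),
            2: np.array([1.0, 1.0]), 3: np.array([2.0, 1.0])}
    graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(greedy_search(graph, vecs, np.array([2.1, 1.1]), entry_point=0))  # node 3
```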
{
"docid": "880aa3de3b839739927cbd82b7abcf8a",
"text": "Can parents burn out? The aim of this research was to examine the construct validity of the concept of parental burnout and to provide researchers which an instrument to measure it. We conducted two successive questionnaire-based online studies, the first with a community-sample of 379 parents using principal component analyses and the second with a community- sample of 1,723 parents using both principal component analyses and confirmatory factor analyses. We investigated whether the tridimensional structure of the burnout syndrome (i.e., exhaustion, inefficacy, and depersonalization) held in the parental context. We then examined the specificity of parental burnout vis-à-vis professional burnout assessed with the Maslach Burnout Inventory, parental stress assessed with the Parental Stress Questionnaire and depression assessed with the Beck Depression Inventory. The results support the validity of a tri-dimensional burnout syndrome including exhaustion, inefficacy and emotional distancing with, respectively, 53.96 and 55.76% variance explained in study 1 and study 2, and reliability ranging from 0.89 to 0.94. The final version of the Parental Burnout Inventory (PBI) consists of 22 items and displays strong psychometric properties (CFI = 0.95, RMSEA = 0.06). Low to moderate correlations between parental burnout and professional burnout, parental stress and depression suggests that parental burnout is not just burnout, stress or depression. The prevalence of parental burnout confirms that some parents are so exhausted that the term \"burnout\" is appropriate. The proportion of burnout parents lies somewhere between 2 and 12%. The results are discussed in light of their implications at the micro-, meso- and macro-levels.",
"title": ""
},
{
"docid": "9441113599194d172b6f618058b2ba88",
"text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.",
"title": ""
},
{
"docid": "997a1ec16394a20b3a7f2889a583b09d",
"text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate.To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "1583d8c41b15fb77787deef955ace886",
"text": "The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It therefore is im- portant that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may fail more likely at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e., to assess how difficult a scene is to a given driving model and to possibly give the human driver an early headsup. A camera- based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human 'ground-truth' maneuvers were then recorded, to yield the 'failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver timely, leading to better human-vehicle collaborative driving.",
"title": ""
},
{
"docid": "f81059b5ff3d621dfa9babc8e68bc0ab",
"text": "A zero voltage switching (ZVS) isolated Sepic converter with active clamp topology is presented. The buck-boost type of active clamp is connected in parallel with the primary side of the transformer to absorb all the energy stored in the transformer leakage inductance and to limit the peak voltage on the switching device. During the transition interval between the main and auxiliary switches, the resonance based on the output capacitor of switch and the transformer leakage inductor can achieve ZVS for both switches. The operational principle, steady state analysis and design consideration of the proposed converter are presented. Finally, the proposed converter is verified by the experimental results based on an 180 W prototype circuit.",
"title": ""
},
{
"docid": "c57c69fd1858b50998ec9706e34f6c46",
"text": "Hashing has recently attracted considerable attention for large scale similarity search. However, learning compact codes with good performance is still a challenge. In many cases, the real-world data lies on a low-dimensional manifold embedded in high-dimensional ambient space. To capture meaningful neighbors, a compact hashing representation should be able to uncover the intrinsic geometric structure of the manifold, e.g., the neighborhood relationships between subregions. Most existing hashing methods only consider this issue during mapping data points into certain projected dimensions. When getting the binary codes, they either directly quantize the projected values with a threshold, or use an orthogonal matrix to refine the initial projection matrix, which both consider projection and quantization separately, and will not well preserve the locality structure in the whole learning process. In this paper, we propose a novel hashing algorithm called Locality Preserving Hashing to effectively solve the above problems. Specifically, we learn a set of locality preserving projections with a joint optimization framework, which minimizes the average projection distance and quantization loss simultaneously. Experimental comparisons with other state-of-the-art methods on two large scale datasets demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
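As a point of contrast with the jointly optimised locality-preserving projections described above, the simplest hashing baseline projects the data onto random directions and quantises with the sign function, then compares codes by Hamming distance. The sketch below shows only that baseline (it is not the paper's method), to make the projection/quantisation split discussed in the abstract concrete.

```python
import numpy as np

def train_hasher(X, n_bits, seed=0):
    """Random-projection hasher: one projection direction per bit."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=(X.shape[1], n_bits))

def encode(X, W):
    return (X @ W > 0).astype(np.uint8)   # sign quantisation -> binary codes

def hamming(a, b):
    return int(np.count_nonzero(a != b))

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(1000, 64))
    W = train_hasher(X, n_bits=32)
    codes = encode(X, W)
    print(hamming(codes[0], codes[1]))     # distance between two items' codes
```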
{
"docid": "fd32f2117ae01049314a0c1cfb565724",
"text": "Smart phones, tablets, and the rise of the Internet of Things are driving an insatiable demand for wireless capacity. This demand requires networking and Internet infrastructures to evolve to meet the needs of current and future multimedia applications. Wireless HetNets will play an important role toward the goal of using a diverse spectrum to provide high quality-of-service, especially in indoor environments where most data are consumed. An additional tier in the wireless HetNets concept is envisioned using indoor gigabit small-cells to offer additional wireless capacity where it is needed the most. The use of light as a new mobile access medium is considered promising. In this article, we describe the general characteristics of WiFi and VLC (or LiFi) and demonstrate a practical framework for both technologies to coexist. We explore the existing research activity in this area and articulate current and future research challenges based on our experience in building a proof-of-concept prototype VLC HetNet.",
"title": ""
},
{
"docid": "638c9e4ba1c3d35fdb766c17b188529d",
"text": "Association football is a popular sport, but it is also a big business. From a managerial perspective, the most important decisions that team managers make concern player transfers, so issues related to player valuation, especially the determination of transfer fees and market values, are of major concern. Market values can be understood as estimates of transfer fees—that is, prices that could be paid for a player on the football market—so they play an important role in transfer negotiations. These values have traditionally been estimated by football experts, but crowdsourcing has emerged as an increasingly popular approach to estimating market value. While researchers have found high correlations between crowdsourced market values and actual transfer fees, the process behind crowd judgments is not transparent, crowd estimates are not replicable, and they are updated infrequently because they require the participation of many users. Data analytics may thus provide a sound alternative or a complementary approach to crowd-based estimations of market value. Based on a unique data set that is comprised of 4217 players from the top five European leagues and a period of six playing seasons, we estimate players’ market values using multilevel regression analysis. The regression results suggest that data-driven estimates of market value can overcome several of the crowd’s practical limitations while producing comparably accurate numbers. Our results have important implications for football managers and scouts, as data analytics facilitates precise, objective, and reliable estimates of market value that can be updated at any time. © 2017 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license. ( http://creativecommons.org/licenses/by-nc-nd/4.0/ )",
"title": ""
},
{
"docid": "5dda89fbe7f5757588b5dff0e6c2565d",
"text": "Introductory psychology students (120 females and 120 males) rated attractiveness and fecundity of one of six computer-altered female gures representing three body-weight categories (underweight, normal weight and overweight) and two levels of waist-to-hip ratio (WHR), one in the ideal range (0.72) and one in the non-ideal range (0.86). Both females and males judged underweight gures to be more attractive than normal or overweight gures, regardless of WHR. The female gure with the high WHR (0.86) was judged to be more attractive than the gure with the low WHR (0.72) across all body-weight conditions. Analyses of fecundity ratings revealed an interaction between weight and WHR such that the models did not differ in the normal weight category, but did differ in the underweight (model with WHR of 0.72 was less fecund) and overweight (model with WHR of 0.86 was more fecund) categories. These ndings lend stronger support to sociocultural rather than evolutionary hypotheses.",
"title": ""
},
{
"docid": "a492dcdbb9ec095cdfdab797c4b4e659",
"text": "We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is essentially a functional version of the grouped lasso of Yuan and Lin (2006). SpAM is also closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. We give an analysis of the theoretical properties of sparse additive models, and present empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high dimensional data.",
"title": ""
},
{
"docid": "813b4607e9675ad4811ba181a912bbe9",
"text": "The end-Permian mass extinction was the most severe biodiversity crisis in Earth history. To better constrain the timing, and ultimately the causes of this event, we collected a suite of geochronologic, isotopic, and biostratigraphic data on several well-preserved sedimentary sections in South China. High-precision U-Pb dating reveals that the extinction peak occurred just before 252.28 ± 0.08 million years ago, after a decline of 2 per mil (‰) in δ(13)C over 90,000 years, and coincided with a δ(13)C excursion of -5‰ that is estimated to have lasted ≤20,000 years. The extinction interval was less than 200,000 years and synchronous in marine and terrestrial realms; associated charcoal-rich and soot-bearing layers indicate widespread wildfires on land. A massive release of thermogenic carbon dioxide and/or methane may have caused the catastrophic extinction.",
"title": ""
},
{
"docid": "fe94febc520eab11318b49391d46476b",
"text": "BACKGROUND\nDiabetes is a chronic disease, with high prevalence across many nations, which is characterized by elevated level of blood glucose and risk of acute and chronic complication. The Kingdom of Saudi Arabia (KSA) has one of the highest levels of diabetes prevalence globally. It is well-known that the treatment of diabetes is complex process and requires both lifestyle change and clear pharmacologic treatment plan. To avoid the complication from diabetes, the effective behavioural change and extensive education and self-management is one of the key approaches to alleviate such complications. However, this process is lengthy and expensive. The recent studies on the user of smart phone technologies for diabetes self-management have proven to be an effective tool in controlling hemoglobin (HbA1c) levels especially in type-2 diabetic (T2D) patients. However, to date no reported study addressed the effectiveness of this approach in the in Saudi patients. This study investigates the impact of using mobile health technologies for the self-management of diabetes in Saudi Arabia.\n\n\nMETHODS\nIn this study, an intelligent mobile diabetes management system (SAED), tailored for T2D patients in KSA was developed. A pilot study of the SAED system was conducted in Saudi Arabia with 20 diabetic patients for 6 months duration. The patients were randomly categorized into a control group who did not use the SAED system and an intervention group whom used the SAED system for their diabetes management during this period. At the end of the follow-up period, the HbA1c levels in the patients in both groups were measure together with a diabetes knowledge test was also conducted to test the diabetes awareness of the patients.\n\n\nRESULTS\nThe results of SAED pilot study showed that the patients in the intervention group were able to significantly decrease their HbA1c levels compared to the control group. The SAED system also enhanced the diabetes awareness amongst the patients in the intervention group during the trial period. These outcomes confirm the global studies on the effectiveness of smart phone technologies in diabetes management. The significance of the study is that this was one of the first such studies conducted on Saudi patients and of their acceptance for such technology in their diabetes self-management treatment plans.\n\n\nCONCLUSIONS\nThe pilot study of the SAED system showed that a mobile health technology can significantly improve the HbA1C levels among Saudi diabetic and improve their disease management plans. The SAED system can also be an effective and low-cost solution in improving the quality of life of diabetic patients in the Kingdom considering the high level of prevalence and the increasing economic burden of this disease.",
"title": ""
},
{
"docid": "98d40e5a6df5b6a3ab39a04bf04c6a65",
"text": "T Internet has increased the flexibility of retailers, allowing them to operate an online arm in addition to their physical stores. The online channel offers potential benefits in selling to customer segments that value the convenience of online shopping, but it also raises new challenges. These include the higher likelihood of costly product returns when customers’ ability to “touch and feel” products is important in determining fit. We study competing retailers that can operate dual channels (“bricks and clicks”) and examine how pricing strategies and physical store assistance levels change as a result of the additional Internet outlet. A central result we obtain is that when differentiation among competing retailers is not too high, having an online channel can actually increase investment in store assistance levels (e.g., greater shelf display, more-qualified sales staff, floor samples) and decrease profits. Consequently, when the decision to open an Internet channel is endogenized, there can exist an asymmetric equilibrium where only one retailer elects to operate an online arm but earns lower profits than its bricks-only rival. We also characterize equilibria where firms open an online channel, even though consumers only use it for research and learning purposes but buy in stores. A number of extensions are discussed, including retail settings where firms carry multiple product categories, shipping and handling costs, and the role of store assistance in impacting consumer perceived benefits.",
"title": ""
},
{
"docid": "ecd7fca4f2ea0207582755a2b9733419",
"text": "This work introduces a novel framework for quantifying the presence and strength of recurrent dynamics in video data. Specifically, we provide continuous measures of periodicity (perfect repetition) and quasiperiodicity (superposition of periodic modes with non-commensurate periods), in a way which does not require segmentation, training, object tracking or 1-dimensional surrogate signals. Our methodology operates directly on video data. The approach combines ideas from nonlinear time series analysis (delay embeddings) and computational topology (persistent homology), by translating the problem of finding recurrent dynamics in video data, into the problem of determining the circularity or toroidality of an associated geometric space. Through extensive testing, we show the robustness of our scores with respect to several noise models/levels; we show that our periodicity score is superior to other methods when compared to human-generated periodicity rankings; and furthermore, we show that our quasiperiodicity score clearly indicates the presence of biphonation in videos of vibrating vocal folds, which has never before been accomplished end to end quantitatively.",
"title": ""
},
{
"docid": "2a89fb135d7c53bda9b1e3b8598663a5",
"text": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.",
"title": ""
},
{
"docid": "850a7daa56011e6c53b5f2f3e33d4c49",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "dc54b73eb740bc1bbdf1b834a7c40127",
"text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.",
"title": ""
}
] | scidocsrr |
80ca22a5818ededa1b9e2126bf539f34 | Dataset, Ground-Truth and Performance Metrics for Table Detection Evaluation | [
{
"docid": "bd963a55c28304493118028fe5f47bab",
"text": "Tables are a common structuring element in many documents, s uch as PDF files. To reuse such tables, appropriate methods need to b e develop, which capture the structure and the content information. We have d e loped several heuristics which together recognize and decompose tables i n PDF files and store the extracted data in a structured data format (XML) for easi er reuse. Additionally, we implemented a prototype, which gives the user the ab ility of making adjustments on the extracted data. Our work shows that purel y heuristic-based approaches can achieve good results, especially for lucid t ables.",
"title": ""
},
{
"docid": "823c0e181286d917a610f90d1c9db0c3",
"text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables, and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies the decisions that are made by a table recognizer, and the assumptions and inferencing techniques that underlie these decisions.",
"title": ""
}
] | [
{
"docid": "56642ffad112346186a5c3f12133e59b",
"text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government’s aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.",
"title": ""
},
{
"docid": "f3ca98a8e0600f0c80ef539cfc58e77e",
"text": "In this paper, we address a real life waste collection vehicle routing problem with time windows (VRPTW) with consideration of multiple disposal trips and drivers’ lunch breaks. Solomon’s well-known insertion algorithm is extended for the problem. While minimizing the number of vehicles and total traveling time is the major objective of vehicle routing problems in the literature, here we also consider the route compactness and workload balancing of a solution since they are very important aspects in practical applications. In order to improve the route compactness and workload balancing, a capacitated clustering-based waste collection VRPTW algorithm is developed. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems at Waste Management, Inc. A set of waste collection VRPTW benchmark problems is also presented in this paper. Waste collection problems are frequently considered as arc routing problems without time windows. However, that point of view can be applied only to residential waste collection problems. In the waste collection industry, there are three major areas: commercial waste collection, residential waste collection and roll-on-roll-off. In this paper, we mainly focus on the commercial waste collection problem. The problem can be characterized as a variant of VRPTW since commercial waste collection stops may have time windows. The major variation from a standard VRPTW is due to disposal operations and driver’s lunch break. When a vehicle is full, it needs to go to one of the disposal facilities (landfill or transfer station). Each vehicle can, and typically does, make multiple disposal trips per day. The purpose of this paper is to introduce the waste collection VRPTW, benchmark problem sets, and a solution approach for the problem. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems of Waste Management, the leading provider of comprehensive waste management services in North America with nearly 26,000 collection and transfer vehicles. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1655b927fa07bed8bf3769bf2dba01b6",
"text": "The non-central chi-square distribution plays an important role in communications, for example in the analysis of mobile and wireless communication systems. It not only includes the important cases of a squared Rayleigh distribution and a squared Rice distribution, but also the generalizations to a sum of independent squared Gaussian random variables of identical variance with or without mean, i.e., a \"squared MIMO Rayleigh\" and \"squared MIMO Rice\" distribution. In this paper closed-form expressions are derived for the expectation of the logarithm and for the expectation of the n-th power of the reciprocal value of a non-central chi-square random variable. It is shown that these expectations can be expressed by a family of continuous functions gm(ldr) and that these families have nice properties (monotonicity, convexity, etc.). Moreover, some tight upper and lower bounds are derived that are helpful in situations where the closed-form expression of gm(ldr) is too complex for further analysis.",
"title": ""
},
{
"docid": "a27d4083741f75f44cd85a8161f1b8b1",
"text": "Graves’ disease (GD) and Hashimoto's thyroiditis (HT) represent the commonest forms of autoimmune thyroid disease (AITD) each presenting with distinct clinical features. Progress has been made in determining association of HLA class II DRB1, DQB1 and DQA1 loci with GD demonstrating a predisposing effect for DR3 (DRB1*03-DQB1*02-DQA1*05) and a protective effect for DR7 (DRB1*07-DQB1*02-DQA1*02). Small data sets have hindered progress in determining HLA class II associations with HT. The aim of this study was to investigate DRB1-DQB1-DQA1 in the largest UK Caucasian HT case control cohort to date comprising 640 HT patients and 621 controls. A strong association between HT and DR4 (DRB1*04-DQB1*03-DQA1*03) was detected (P=6.79 × 10−7, OR=1.98 (95% CI=1.51–2.59)); however, only borderline association of DR3 was found (P=0.050). Protective effects were also detected for DR13 (DRB1*13-DQB1*06-DQA1*01) (P=0.001, OR=0.61 (95% CI=0.45–0.83)) and DR7 (P=0.013, OR=0.70 (95% CI=0.53–0.93)). Analysis of our unique cohort of subjects with well characterized AITD has demonstrated clear differences in association within the HLA class II region between HT and GD. Although HT and GD share a number of common genetic markers this study supports the suggestion that differences in HLA class II genotype may, in part, contribute to the different immunopathological processes and clinical presentation of these related diseases.",
"title": ""
},
{
"docid": "df10984391cfb52e8ece9ae3766754c1",
"text": "A major challenge that arises in Weakly Supervised Object Detection (WSOD) is that only image-level labels are available, whereas WSOD trains instance-level object detectors. A typical approach to WSOD is to 1) generate a series of region proposals for each image and assign the image-level label to all the proposals in that image; 2) train a classifier using all the proposals; and 3) use the classifier to select proposals with high confidence scores as the positive instances for another round of training. In this way, the image-level labels are iteratively transferred to instance-level labels.\n We aim to resolve the following two fundamental problems within this paradigm. First, existing proposal generation algorithms are not yet robust, thus the object proposals are often inaccurate. Second, the selected positive instances are sometimes noisy and unreliable, which hinders the training at subsequent iterations. We adopt two separate neural networks, one to focus on each problem, to better utilize the specific characteristic of region proposal refinement and positive instance selection. Further, to leverage the mutual benefits of the two tasks, the two neural networks are jointly trained and reinforced iteratively in a progressive manner, starting with easy and reliable instances and then gradually incorporating difficult ones at a later stage when the selection classifier is more robust. Extensive experiments on the PASCAL VOC dataset show that our method achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "3f569eccc71c6186d6163a2cc40be0fc",
"text": "Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification. However, the actual performance of DPI is still unclear to the research community, since the lack of public datasets prevent the comparison and reproducibility of their results. This paper presents a comprehensive comparison of 6 well-known DPI tools, which are commonly used in the traffic classification literature. Our study includes 2 commercial products (PACE and NBAR) and 4 open-source tools (OpenDPI, L7-filter, nDPI, and Libprotoident). We studied their performance in various scenarios (including packet and flow truncation) and at different classification levels (application protocol, application and web service). We carefully built a labeled dataset with more than 750 K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We released this dataset, including full packet payloads, to the research community. We believe this dataset could become a common benchmark for the comparison and validation of network traffic classifiers. Our results present PACE, a commercial tool, as the most accurate solution. Surprisingly, we find that some open-source tools, such as nDPI and Libprotoident, also achieve very high accuracy.",
"title": ""
},
{
"docid": "eea8a23547ea5a29be036285034fc0a0",
"text": "Co-fabrication of a nanoscale vacuum field emission transistor (VFET) and a metal-oxide-semiconductor field effect transistor (MOSFET) is demonstrated on a silicon-on-insulator wafer. The insulated-gate VFET with a gap distance of 100 nm is achieved by using a conventional 0.18-μm process technology and subsequent photoresist ashing process. The VFET shows a turn-on voltage of 2 V at a cell current of 2 nA and a cell current of 3 μA at the operation voltage of 10 V with an ON/OFF current ratio of 104. The gap distance between the cathode and anode in the VFET is defined to be less than the mean free path of electrons in air, and consequently, the operation voltage is reduced to be less than the ionization potential of air molecules. This allows the relaxation of the vacuum requirement. The present integration scheme can be useful as it combines the advantages of both structures on the same chip.",
"title": ""
},
{
"docid": "0ad47e79e9bea44a76029e1f24f0a16c",
"text": "The requirements for OLTP database systems are becoming ever more demanding. New OLTP applications require high degrees of scalability with controlled transaction latencies in in-memory databases. Deployments of these applications require low-level control of database system overhead and program-to-data affinity to maximize resource utilization in modern machines. Unfortunately, current solutions fail to meet these requirements. First, existing database solutions fail to expose a high-level programming abstraction in which latency of transactions can be reasoned about by application developers. Second, these solutions limit infrastructure engineers in exercising low-level control on the deployment of the system on a target infrastructure, further impacting performance. In this paper, we propose a relational actor programming model for in-memory databases. Conceptually, relational actors, or reactors for short, are application-defined, isolated logical actors encapsulating relations that process function calls asynchronously. Reactors ease reasoning about correctness by guaranteeing serializability of application-level function calls. In contrast to classic transactional models, however, reactors allow developers to take advantage of intra-transaction parallelism to reduce latency and improve performance. Moreover, reactors enable a new degree of flexibility in database deployment. We present REACTDB, a novel system design exposing reactors that allows for flexible virtualization of database architecture between the extremes of shared-nothing and shared-everything without changes to application code. Our experiments with REACTDB illustrate performance predictability, multi-core scalability, and low overhead in OLTP benchmarks.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "97f2e2ceeb4c1e2b8d8fbc8a46159730",
"text": "Novel scientific knowledge is constantly produced by the scientific community. Understanding the level of novelty characterized by scientific literature is key for modeling scientific dynamics and analyzing the growth mechanisms of scientific knowledge. Metrics derived from bibliometrics and citation analysis were effectively used to characterize the novelty in scientific development. However, time is required before we can observe links between documents such as citation links or patterns derived from the links, which makes these techniques more effective for retrospective analysis than predictive analysis. In this study, we present a new approach to measuring the novelty of a research topic in a scientific community over a specific period by tracking semantic changes of the terms and characterizing the research topic in their usage context. The semantic changes are derived from the text data of scientific literature by temporal embedding learning techniques. We validated the effects of the proposed novelty metric on predicting the future growth of scientific publications and investigated the relations between novelty and growth by panel data analysis applied in a largescale publication dataset (MEDLINE/PubMed). Key findings based on the statistical investigation indicate that the novelty metric has significant predictive effects on the growth of scientific literature and the predictive effects may last for more than ten years. We demonstrated the effectiveness and practical implications of the novelty metric in three case studies. ∗[email protected], [email protected]. Department of Information Science, Drexel University. 1 ar X iv :1 80 1. 09 12 1v 1 [ cs .D L ] 2 7 Ja n 20 18",
"title": ""
},
{
"docid": "39a63943fdc69942088fab0e5e7131f2",
"text": "Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, Robotics poses many challenges for RL, most notably training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images). We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input. We show experimentally on a range of simulated tasks that using these asymmetric inputs significantly improves performance. Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation to real world transfer without training on any real world data. Videos of these experiments can be found at www.goo.gl/b57WTs.",
"title": ""
},
{
"docid": "99c1ad04419fa0028724a26e757b1b90",
"text": "Contrary to popular belief, despite decades of research in fingerprints, reliable fingerprint recognition is still an open problem. Extracting features out of poor quality prints is the most challenging problem faced in this area. This paper introduces a new approach for fingerprint enhancement based on Short Time Fourier Transform(STFT) Analysis. STFT is a well known technique in signal processing to analyze non-stationary signals. Here we extend its application to 2D fingerprint images. The algorithm simultaneously estimates all the intrinsic properties of the fingerprints such as the foreground region mask, local ridge orientation and local ridge frequency. Furthermore we propose a probabilistic approach of robustly estimating these parameters. We experimentally compare the proposed approach to other filtering approaches in literature and show that our technique performs favorably.",
"title": ""
},
{
"docid": "4c49cebd579b2fef196d7ce600b1a044",
"text": "A GPU cluster is a cluster equipped with GPU devices. Excellent acceleration is achievable for computation-intensive tasks (e. g. matrix multiplication and LINPACK) and bandwidth-intensive tasks with data locality (e. g. finite-difference simulation). Bandwidth-intensive tasks such as large-scale FFTs without data locality are harder to accelerate, as the bottleneck often lies with the PCI between main memory and GPU device memory or the communication network between workstation nodes. That means optimizing the performance of FFT for a single GPU device will not improve the overall performance. This paper uses large-scale FFT as an example to show how to achieve substantial speedups for these more challenging tasks on a GPU cluster. Three GPU-related factors lead to better performance: firstly the use of GPU devices improves the sustained memory bandwidth for processing large-size data; secondly GPU device memory allows larger subtasks to be processed in whole and hence reduces repeated data transfers between memory and processors; and finally some costly main-memory operations such as matrix transposition can be significantly sped up by GPUs if necessary data adjustment is performed during data transfers. This technique of manipulating array dimensions during data transfer is the main technical contribution of this paper. These factors (as well as the improved communication library in our implementation) attribute to 24.3x speedup with respect to FFTW and 7x speedup with respect to Intel MKL for 4096 3D single-precision FFT on a 16-node cluster with 32 GPUs. Around 5x speedup with respect to both standard libraries are achieved for double precision.",
"title": ""
},
{
"docid": "61051ddfb877064e477bea0131bddef4",
"text": "Portfolio diversification in capital markets is an accepted investment strategy. On the other hand corporate diversification has drawn many opponents especially the agency theorists who argue that executives must not diversify on behalf of share holders. Diversification is a strategic option used by many managers to improve their firm’s performance. While extensive literature investigates the diversification performance linkage, little agreements exist concerning the nature of this relationship. Both theoretical and empirical disagreements abound as the extensive research has neither reached a consensus nor any interpretable and acceptable findings. This paper looked at diversification as a corporate strategy and its effect on firm performance using Conglomerates in the Food and Beverages Sector listed on the ZSE. The study used a combination of primary and secondary data. Primary data was collected through interviews while secondary data were gathered from financial statements and management accounts. Data was analyzed using SPSS computer package. Three competing models were derived from literature (the linear model, Inverted U model and Intermediate model) and these were empirically assessed and tested.",
"title": ""
},
{
"docid": "f28472c17234096fa73d6bee95d99498",
"text": "The class average accuracies of different methods on the NYU V2: The Proposed Network Structure The model has a convolutional network and deconvolutional network for each modality, as well as a feature transformation network. In this structure, 1. The RGB and depth convolutional network have the same structure; 2. The deconvolutional networks are the mirrored version of the convolutional networks; 3. The feature transformation network extracts common features and modality specific features; 4. One modality can borrow the common features learned from the other modality.",
"title": ""
},
{
"docid": "bf1bcf55307b02adca47ff696be6f801",
"text": "INTRODUCTION\nMobile phones are ubiquitous in society and owned by a majority of psychiatric patients, including those with severe mental illness. Their versatility as a platform can extend mental health services in the areas of communication, self-monitoring, self-management, diagnosis, and treatment. However, the efficacy and reliability of publicly available applications (apps) have yet to be demonstrated. Numerous articles have noted the need for rigorous evaluation of the efficacy and clinical utility of smartphone apps, which are largely unregulated. Professional clinical organizations do not provide guidelines for evaluating mobile apps.\n\n\nMATERIALS AND METHODS\nGuidelines and frameworks are needed to evaluate medical apps. Numerous frameworks and evaluation criteria exist from the engineering and informatics literature, as well as interdisciplinary organizations in similar fields such as telemedicine and healthcare informatics.\n\n\nRESULTS\nWe propose criteria for both patients and providers to use in assessing not just smartphone apps, but also wearable devices and smartwatch apps for mental health. Apps can be evaluated by their usefulness, usability, and integration and infrastructure. Apps can be categorized by their usability in one or more stages of a mental health provider's workflow.\n\n\nCONCLUSIONS\nUltimately, leadership is needed to develop a framework for describing apps, and guidelines are needed for both patients and mental health providers.",
"title": ""
},
{
"docid": "42aa520e1c46749e7abc924c0f56442d",
"text": "Internet of Things is evolving heavily in these times. One of the major obstacle is energy consumption in the IoT devices (sensor nodes and wireless gateways). The IoT devices are often battery powered wireless devices and thus reducing the energy consumption in these devices is essential to lengthen the lifetime of the device without battery change. It is possible to lengthen battery lifetime by efficient but lightweight sensor data analysis in close proximity of the sensor. Performing part of the sensor data analysis in the end device can reduce the amount of data needed to transmit wirelessly. Transmitting data wirelessly is very energy consuming task. At the same time, the privacy and security should not be compromised. It requires effective but computationally lightweight encryption schemes. This survey goes thru many aspects to consider in edge and fog devices to minimize energy consumption and thus lengthen the device and the network lifetime.",
"title": ""
},
{
"docid": "7bd0d55e08ff4d94c021dd53142ef5aa",
"text": "From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible.",
"title": ""
},
{
"docid": "0cbc2eb794f44b178a54d97aeff69c19",
"text": "Automatic identification of predatory conversations i chat logs helps the law enforcement agencies act proactively through early detection of predatory acts in cyberspace. In this paper, we describe the novel application of a deep learnin g method to the automatic identification of predatory chat conversations in large volumes of ch at logs. We present a classifier based on Convolutional Neural Network (CNN) to address this problem domain. The proposed CNN architecture outperforms other classification techn iques that are common in this domain including Support Vector Machine (SVM) and regular Neural Network (NN) in terms of classification performance, which is measured by F 1-score. In addition, our experiments show that using existing pre-trained word vectors are no t suitable for this specific domain. Furthermore, since the learning algorithm runs in a m ssively parallel environment (i.e., general-purpose GPU), the approach can benefit a la rge number of computation units (neurons) compared to when CPU is used. To the best of our knowledge, this is the first tim e that CNNs are adapted and applied to this application do main.",
"title": ""
},
{
"docid": "6ec4c9e6b3e2a9fd4da3663a5b21abcd",
"text": "In order to ensure the service quality, modern Internet Service Providers (ISPs) invest tremendously on their network monitoring and measurement infrastructure. Vast amount of network data, including device logs, alarms, and active/passive performance measurement across different network protocols and layers, are collected and stored for analysis. As network measurement grows in scale and sophistication, it becomes increasingly challenging to effectively “search” for the relevant information that best support the needs of network operations. In this paper, we look into techniques that have been widely applied in the information retrieval and search engine domain and explore their applicability in network management domain. We observe that unlike the textural information on the Internet, network data are typically annotated with time and location information, which can be further augmented using information based on network topology, protocol and service dependency. We design NetSearch, a system that pre-processes various network data sources on data ingestion, constructs index that matches both the network spatial hierarchy model and the inherent timing/textual information contained in the data, and efficiently retrieves the relevant information that network operators search for. Through case study, we demonstrate that NetSearch is an important capability for many critical network management functions such as complex impact analysis.",
"title": ""
}
] | scidocsrr |
08f2b24f0b7bc1bc200f868e5fa932a7 | Facial volume restoration of the aging face with poly-l-lactic acid. | [
{
"docid": "41ac115647c421c44d7ef1600814dc3e",
"text": "PURPOSE\nThe bony skeleton serves as the scaffolding for the soft tissues of the face; however, age-related changes of bony morphology are not well defined. This study sought to compare the anatomic relationships of the facial skeleton and soft tissue structures between young and old men and women.\n\n\nMETHODS\nA retrospective review of CT scans of 100 consecutive patients imaged at Duke University Medical Center between 2004 and 2007 was performed using the Vitrea software package. The study population included 25 younger women (aged 18-30 years), 25 younger men, 25 older women (aged 55-65 years), and 25 older men. Using a standardized reference line, the distances from the anterior corneal plane to the superior orbital rim, lateral orbital rim, lower eyelid fat pad, inferior orbital rim, anterior cheek mass, and pyriform aperture were measured. Three-dimensional bony reconstructions were used to record the angular measurements of 4 bony regions: glabellar, orbital, maxillary, and pyriform aperture.\n\n\nRESULTS\nThe glabellar (p = 0.02), orbital (p = 0.0007), maxillary (p = 0.0001), and pyriform (p = 0.008) angles all decreased with age. The maxillary pyriform (p = 0.003) and infraorbital rim (p = 0.02) regressed with age. Anterior cheek mass became less prominent with age (p = 0.001), but the lower eyelid fat pad migrated anteriorly over time (p = 0.007).\n\n\nCONCLUSIONS\nThe facial skeleton appears to remodel throughout adulthood. Relative to the globe, the facial skeleton appears to rotate such that the frontal bone moves anteriorly and inferiorly while the maxilla moves posteriorly and superiorly. This rotation causes bony angles to become more acute and likely has an effect on the position of overlying soft tissues. These changes appear to be more dramatic in women.",
"title": ""
},
{
"docid": "0802735955b52c1dae64cf34a97a33fb",
"text": "Cutaneous facial aging is responsible for the increasingly wrinkled and blotchy appearance of the skin, whereas aging of the facial structures is attributed primarily to gravity. This article purports to show, however, that the primary etiology of structural facial aging relates instead to repeated contractions of certain facial mimetic muscles, the age marker fascicules, whereas gravity only secondarily abets an aging process begun by these muscle contractions. Magnetic resonance imaging (MRI) has allowed us to study the contrasts in the contour of the facial mimetic muscles and their associated deep and superficial fat pads in patients of different ages. The MRI model shows that the facial mimetic muscles in youth have a curvilinear contour presenting an anterior surface convexity. This curve reflects an underlying fat pad lying deep to these muscles, which acts as an effective mechanical sliding plane. The muscle’s anterior surface convexity constitutes the key evidence supporting the authors’ new aging theory. It is this youthful convexity that dictates a specific characteristic to the muscle contractions conveyed outwardly as youthful facial expression, a specificity of both direction and amplitude of facial mimetic movement. With age, the facial mimetic muscles (specifically, the age marker fascicules), as seen on MRI, gradually straighten and shorten. The authors relate this radiologic end point to multiple repeated muscle contractions over years that both expel underlying deep fat from beneath the muscle plane and increase the muscle resting tone. Hence, over time, structural aging becomes more evident as the facial appearance becomes more rigid.",
"title": ""
}
] | [
{
"docid": "ecca793cace7cbf6cc142f2412847df4",
"text": "The development of capacitive power transfer (CPT) as a competitive wireless/contactless power transfer solution over short distances is proving viable in both consumer and industrial electronic products/systems. The CPT is usually applied in low-power applications, due to small coupling capacitance. Recent research has increased the coupling capacitance from the pF to the nF scale, enabling extension of CPT to kilowatt power level applications. This paper addresses the need of efficient power electronics suitable for CPT at higher power levels, while remaining cost effective. Therefore, to reduce the cost and losses single-switch-single-diode topologies are investigated. Four single active switch CPT topologies based on the canonical Ćuk, SEPIC, Zeta, and Buck-boost converters are proposed and investigated. Performance tradeoffs within the context of a CPT system are presented and corroborated with experimental results. A prototype single active switch converter demonstrates 1-kW power transfer at a frequency of 200 kHz with >90% efficiency.",
"title": ""
},
{
"docid": "0fc3976820ca76c630476647761f9c21",
"text": "Paper Mechatronics is a novel interdisciplinary design medium, enabled by recent advances in craft technologies: the term refers to a reappraisal of traditional papercraft in combination with accessible mechanical, electronic, and computational elements. I am investigating the design space of paper mechatronics as a new hands-on medium by developing a series of examples and building a computational tool, FoldMecha, to support non-experts to design and construct their own paper mechatronics models. This paper describes how I used the tool to create two kinds of paper mechatronics models: walkers and flowers and discuss next steps.",
"title": ""
},
{
"docid": "4c9d20c4d264a950cb89bd41401ec99a",
"text": "The primary goal of a recommender system is to generate high quality user-centred recommendations. However, the traditional evaluation methods and metrics were developed before researchers understood all the factors that increase user satisfaction. This study is an introduction to a novel user and item classification framework. It is proposed that this framework should be used during user-centred evaluation of recommender systems and the need for this framework is justified through experiments. User profiles are constructed and matched against other users’ profiles to formulate neighbourhoods and generate top-N recommendations. The recommendations are evaluated to measure the success of the process. In conjunction with the framework, a new diversity metric is presented and explained. The accuracy, coverage, and diversity of top-N recommendations is illustrated and discussed for groups of users. It is found that in contradiction to common assumptions, not all users suffer as expected from the data sparsity problem. In fact, the group of users that receive the most accurate recommendations do not belong to the least sparse area of the dataset.",
"title": ""
},
{
"docid": "3da6c20ba154de6fbea24c3cbb9c8ebb",
"text": "The tourism industry is characterized by ever-increasing competition, causing destinations to seek new methods to attract tourists. Traditionally, a decision to visit a destination is interpreted, in part, as a rational calculation of the costs/benefits of a set of alternative destinations, which were derived from external information sources, including e-WOM (word-of-mouth) or travelers' blogs. There are numerous travel blogs available for people to share and learn about travel experiences. Evidence shows, however, that not every blog exerts the same degree of influence on tourists. Therefore, which characteristics of these travel blogs attract tourists' attention and influence their decisions, becomes an interesting research question. Based on the concept of information relevance, a model is proposed for interrelating various attributes specific to blog's content and perceived enjoyment, an intrinsic motivation of information systems usage, to mitigate the above-mentioned gap. Results show that novelty, understandability, and interest of blogs' content affect behavioral intention through blog usage enjoyment. Finally, theoretical and practical implications are proposed. Tourism is a popular activity in modern life and has contributed significantly to economic development for decades. However, competition in almost every sector of this industry has intensified during recent years & Pan, 2008); tourism service providers are now finding it difficult to acquire and keep customers (Echtner & Ritchie, 1991; Ho, 2007). Therefore, methods of attracting tourists to a destination are receiving greater attention from researchers, policy makers, and marketers. Before choosing a destination, tourists may search for information to support their decision-making By understanding the relationships between various information sources' characteristics and destination choice, tourism managers can improve their marketing efforts. Recently, personal blogs have become an important source for acquiring travel information With personal blogs, many tourists can share their travel experiences with others and potential tourists can search for and respond to others' experiences. Therefore, a blog can be seen as an asynchronous and many-to-many channel for conveying travel-related electronic word-of-mouth (e-WOM). By using these forms of inter-personal influence media, companies in this industry can create a competitive advantage (Litvin et al., 2008; Singh et al., 2008). Weblogs are now widely available; therefore, it is not surprising that the quantity of available e-WOM has increased (Xiang & Gret-zel, 2010) to an extent where information overload has become a Empirical evidence , however, indicates that people may not consult numerous blogs for advice; the degree of inter-personal influence varies from blog to blog (Zafiropoulos, 2012). Determining …",
"title": ""
},
{
"docid": "8a91835866267ef83ba245c12ce1283d",
"text": "Due to the increasing demand in the agricultural industry, the need to effectively grow a plant and increase its yield is very important. In order to do so, it is important to monitor the plant during its growth period, as well as, at the time of harvest. In this paper image processing is used as a tool to monitor the diseases on fruits during farming, right from plantation to harvesting. For this purpose artificial neural network concept is used. Three diseases of grapes and two of apple have been selected. The system uses two image databases, one for training of already stored disease images and the other for implementation of query images. Back propagation concept is used for weight adjustment of training database. The images are classified and mapped to their respective disease categories on basis of three feature vectors, namely, color, texture and morphology. From these feature vectors morphology gives 90% correct result and it is more than other two feature vectors. This paper demonstrates effective algorithms for spread of disease and mango counting. Practical implementation of neural networks has been done using MATLAB.",
"title": ""
},
{
"docid": "c9e47bfe0f1721a937ba503ed9913dba",
"text": "The Web contains a vast amount of structured information such as HTML tables, HTML lists and deep-web databases; there is enormous potential in combining and re-purposing this data in creative ways. However, integrating data from this relational web raises several challenges that are not addressed by current data integration systems or mash-up tools. First, the structured data is usually not published cleanly and must be extracted (say, from an HTML list) before it can be used. Second, due to the vastness of the corpus, a user can never know all of the potentially-relevant databases ahead of time (much less write a wrapper or mapping for each one); the source databases must be discovered during the integration process. Third, some of the important information regarding the data is only present in its enclosing web page and needs to be extracted appropriately. This paper describes Octopus, a system that combines search, extraction, data cleaning and integration, and enables users to create new data sets from those found on the Web. The key idea underlying Octopus is to offer the user a set of best-effort operators that automate the most labor-intensive tasks. For example, the Search operator takes a search-style keyword query and returns a set of relevance-ranked and similarity-clustered structured data sources on the Web; the Context operator helps the user specify the semantics of the sources by inferring attribute values that may not appear in the source itself, and the Extend operator helps the user find related sources that can be joined to add new attributes to a table. Octopus executes some of these operators automatically, but always allows the user to provide feedback and correct errors. We describe the algorithms underlying each of these operators and experiments that demonstrate their efficacy.",
"title": ""
},
{
"docid": "c32d61da51308397d889db143c3e6f9d",
"text": "Children’s neurological development is influenced by their experiences. Early experiences and the environments in which they occur can alter gene expression and affect long-term neural development. Today, discretionary screen time, often involving multiple devices, is the single main experience and environment of children. Various screen activities are reported to induce structural and functional brain plasticity in adults. However, childhood is a time of significantly greater changes in brain anatomical structure and connectivity. There is empirical evidence that extensive exposure to videogame playing during childhood may lead to neuroadaptation and structural changes in neural regions associated with addiction. Digital natives exhibit a higher prevalence of screen-related ‘addictive’ behaviour that reflect impaired neurological rewardprocessing and impulse-control mechanisms. Associations are emerging between screen dependency disorders such as Internet Addiction Disorder and specific neurogenetic polymorphisms, abnormal neural tissue and neural function. Although abnormal neural structural and functional characteristics may be a precondition rather than a consequence of addiction, there may also be a bidirectional relationship. As is the case with substance addictions, it is possible that intensive routine exposure to certain screen activities during critical stages of neural development may alter gene expression resulting in structural, synaptic and functional changes in the developing brain leading to screen dependency disorders, particularly in children with predisposing neurogenetic profiles. There may also be compound/secondary effects on neural development. Screen dependency disorders, even at subclinical levels, involve high levels of discretionary screen time, inducing greater child sedentary behaviour thereby reducing vital aerobic fitness, which plays an important role in the neurological health of children, particularly in brain structure and function. Child health policy must therefore adhere to the principle of precaution as a prudent approach to protecting child neurological integrity and well-being. This paper explains the basis of current paediatric neurological concerns surrounding screen dependency disorders and proposes preventive strategies for child neurology and allied professions.",
"title": ""
},
{
"docid": "910fdcf9e9af05b5d1cb70a9c88e4143",
"text": "We propose NEURAL ENQUIRER — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEURAL ENQUIRER is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.",
"title": ""
},
{
"docid": "c56c392e1a7d58912eeeb1718379fa37",
"text": "The changing face of technology has played an integral role in the development of the hotel and restaurant industry. The manuscript investigated the impact that technology has had on the hotel and restaurant industry. A detailed review of the literature regarding the growth of technology in the industry was linked to the development of strategic direction. The manuscript also looked at the strategic analysis methodology for evaluating and taking advantage of current and future technological innovations for the hospitality industry. Identification and implementation of these technologies can help in building a sustainable competitive advantage for hotels and restaurants.",
"title": ""
},
{
"docid": "1040e96ab179d5705eeb2983bdef31d3",
"text": "Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds. To achieve this so-called grounded language learning, models must overcome certain well-studied learning challenges that are also fundamental to infants learning their first words. While it is notable that models with no meaningful prior knowledge overcome these learning obstacles, AI researchers and practitioners currently lack a clear understanding of exactly how they do so. Here we address this question as a way of achieving a clearer general understanding of grounded language learning, both to inform future research and to improve confidence in model predictions. For maximum control and generality, we focus on a simple neural network-based language learning agent trained via policy-gradient methods to interpret synthetic linguistic instructions in a simulated 3D world. We apply experimental paradigms from developmental psychology to this agent, exploring the conditions under which established human biases and learning effects emerge. We further propose a novel way to visualise and analyse semantic representation in grounded language learning agents that yields a plausible computational account of the observed effects.",
"title": ""
},
{
"docid": "b0d959bdb58fbcc5e324a854e9e07b81",
"text": "It is well known that the road signs play’s a vital role in road safety its ignorance results in accidents .This Paper proposes an Idea for road safety by using a RFID based traffic sign recognition system. By using it we can prevent the road risk up to a great extend.",
"title": ""
},
{
"docid": "659e71fb9274c47f369c37de751a91b2",
"text": "The Timed Up and Go (TUG) is a clinical test used widely to measure balance and mobility, e.g. in Parkinson's disease (PD). The test includes a sequence of functional activities, namely: sit-to-stand, 3-meters walk, 180° turn, walk back, another turn and sit on the chair. Meanwhile the stopwatch is used to score the test by measuring the time which the patients with PD need to perform the test. Here, the work presents an instrumented TUG using a wearable inertial sensor unit attached on the lower back of the person. The approach is used to automate the process of assessment compared with the manual evaluation by using visual observation and a stopwatch. The developed algorithm is based on the Dynamic Time Warping (DTW) for multi-dimensional time series and has been applied with the augmented feature for detection and duration assessment of turn state transitions, while a 1-dimensional DTW is used to detect the sit-to-stand and stand-to-sit phases. The feature set is a 3-dimensional vector which consists of the angular velocity, derived angle and features from Linear Discriminant Analysis (LDA). The algorithm was tested on 10 healthy individuals and 20 patients with PD (10 patients with early and late disease phases respectively). The test demonstrates that the developed technique can successfully extract the time information of the sit-to-stand, both turns and stand-to-sit transitions in the TUG test.",
"title": ""
},
{
"docid": "3e83f454f66e8aba14733205c8e19753",
"text": "BACKGROUND\nNormal-weight adults gain lower-body fat via adipocyte hyperplasia and upper-body subcutaneous (UBSQ) fat via adipocyte hypertrophy.\n\n\nOBJECTIVES\nWe investigated whether regional fat loss mirrors fat gain and whether the loss of lower-body fat is attributed to decreased adipocyte number or size.\n\n\nDESIGN\nWe assessed UBSQ, lower-body, and visceral fat gains and losses in response to overfeeding and underfeeding in 23 normal-weight adults (15 men) by using dual-energy X-ray absorptiometry and abdominal computed tomography scans. Participants gained ∼5% of weight in 8 wk and lost ∼80% of gained fat in 8 wk. We measured abdominal subcutaneous and femoral adipocyte sizes and numbers after weight gain and loss.\n\n\nRESULTS\nVolunteers gained 3.1 ± 2.1 (mean ± SD) kg body fat with overfeeding and lost 2.4 ± 1.7 kg body fat with underfeeding. Although UBSQ and visceral fat gains were completely reversed after 8 wk of underfeeding, lower-body fat had not yet returned to baseline values. Abdominal and femoral adipocyte sizes, but not numbers, decreased with weight loss. Decreases in abdominal adipocyte size and UBSQ fat mass were correlated (ρ = 0.76, P = 0.001), as were decreases in femoral adipocyte size and lower-body fat (ρ = 0.49, P = 0.05).\n\n\nCONCLUSIONS\nUBSQ and visceral fat increase and decrease proportionately with a short-term weight gain and loss, whereas a gain of lower-body fat does not relate to the loss of lower-body fat. The loss of lower-body fat is attributed to a reduced fat cell size, but not number, which may result in long-term increases in fat cell numbers.",
"title": ""
},
{
"docid": "2c8e50194e4b2238b9af86806323e2c5",
"text": "Previous research suggests a possible link between eveningness and general difficulties with self-regulation (e.g., evening types are more likely than other chronotypes to have irregular sleep schedules and social rhythms and use substances). Our study investigated the relationship between eveningness and self-regulation by using two standardized measures of self-regulation: the Self-Control Scale and the Procrastination Scale. We predicted that an eveningness preference would be associated with poorer self-control and greater procrastination than would an intermediate or morningness preference. Participants were 308 psychology students (mean age=19.92 yrs) at a small Canadian college. Students completed the self-regulation questionnaires and Morningness/Eveningness Questionnaire (MEQ) online. The mean MEQ score was 46.69 (SD=8.20), which is intermediate between morningness and eveningness. MEQ scores ranged from definite morningness to definite eveningness, but the dispersion of scores was skewed toward more eveningness. Pearson and partial correlations (controlling for age) were used to assess the relationship between MEQ score and the Self-Control Scale (global score and 5 subscale scores) and Procrastination Scale (global score). All correlations were significant. The magnitude of the effects was medium for all measures except one of the Self-Control subscales, which was small. A multiple regression analysis to predict MEQ score using the Self-Control Scale (global score), Procrastination Scale, and age as predictors indicated the Self-Control Scale was a significant predictor (accounting for 20% of the variance). A multiple regression analysis to predict MEQ scores using the five subscales of the Self-Control Scale and age as predictors showed the subscales for reliability and work ethic were significant predictors (accounting for 33% of the variance). Our study showed a relationship between eveningness and low self-control, but it did not address whether the relationship is a causal one.",
"title": ""
},
{
"docid": "81b3562907a19a12f02b82f927d89dc7",
"text": "Warehouse automation systems that use robots to save human labor are becoming increasingly common. In a previous study, a picking system using a multi-joint type robot was developed. However, articulated robots are not ideal in warehouse scenarios, since inter-shelf space can limit their freedom of motion. Although the use of linear motion-type robots has been suggested as a solution, their drawback is that an additional cable carrier is needed. The authors therefore propose a new configuration for a robot manipulator that uses wireless power transmission (WPT), which delivers power without physical contact except at the base of the robot arm. We describe here a WPT circuit design suitable for rotating and sliding-arm mechanisms. Overall energy efficiency was confirmed to be 92.0%.",
"title": ""
},
{
"docid": "3609f4923b9aebc3d18f31ac6ae78bea",
"text": "Cloud computing is playing an ever larger role in the IT infrastructure. The migration into the cloud means that we must rethink and adapt our security measures. Ultimately, both the cloud provider and the customer have to accept responsibilities to ensure security best practices are followed. Firewalls are one of the most critical security features. Most IaaS providers make firewalls available to their customers. In most cases, the customer assumes a best-case working scenario which is often not assured. In this paper, we studied the filtering behavior of firewalls provided by five different cloud providers. We found that three providers have firewalls available within their infrastructure. Based on our findings, we developed an open-ended firewall monitoring tool which can be used by cloud customers to understand the firewall's filtering behavior. This information can then be efficiently used for risk management and further security considerations. Measuring today's firewalls has shown that they perform well for the basics, although may not be fully featured considering fragmentation or stateful behavior.",
"title": ""
},
{
"docid": "b3f5d9335cccf62797c86b76fa2c9e7e",
"text": "For most families with elderly relatives, care within their own home is by far the most preferred option both for the elderly and their carers. However, frequently these carers are the partners of the person with long-term care needs, and themselves are elderly and in need of support to cope with the burdens and stress associated with these duties. When it becomes too much for them, they may have to rely on professional care services, or even use residential care for a respite. In order to support the carers as well as the elderly person, an ambient assisted living platform has been developed. The system records information about the activities of daily living using unobtrusive sensors within the home, and allows the carers to record their own wellbeing state. By providing facilities to schedule and monitor the activities of daily care, and providing orientation and advice to improve the care given and their own wellbeing, the system helps to reduce the burden on the informal carers. Received on 30 August 2016; accepted on 03 February 2017; published on 21 March 2017",
"title": ""
},
{
"docid": "60971d26877ef62b816526f13bd76c24",
"text": "Breast cancer is one of the leading causes of cancer death among women worldwide. In clinical routine, automatic breast ultrasound (BUS) image segmentation is very challenging and essential for cancer diagnosis and treatment planning. Many BUS segmentation approaches have been studied in the last two decades, and have been proved to be effective on private datasets. Currently, the advancement of BUS image segmentation seems to meet its bottleneck. The improvement of the performance is increasingly challenging, and only few new approaches were published in the last several years. It is the time to look at the field by reviewing previous approaches comprehensively and to investigate the future directions. In this paper, we study the basic ideas, theories, pros and cons of the approaches, group them into categories, and extensively review each category in depth by discussing the principles, application issues, and advantages/disadvantages. Keyword: breast ultrasound (BUS) images; breast cancer; segmentation; benchmark; early detection; computer-aided diagnosis (CAD)",
"title": ""
},
{
"docid": "da5562859bfed0057e0566679a4aca3d",
"text": "Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention. M2M is a key enabling technology for the cyber-physical systems (CPSs). This paper explores CPS beyond M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe few particular use cases that motivate the development of the M2M communication primitives tailored to large-scale CPS. M2M communications in literature were considered in limited extent so far. The existing work is based on small-scale M2M models and centralized solutions. Different sources discuss different primitives. Few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and trillions of M2M devices, without sacrificing solution quality. The main paradigm shift is to design localized algorithms, where CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, were studied to illustrate new directions.",
"title": ""
},
{
"docid": "72d74a0eaa768f46b17bf75f1a059d3f",
"text": "Cloud gaming represents a highly interactive service whereby game logic is rendered in the cloud and streamed as a video to end devices. While benefits include the ability to stream high-quality graphics games to practically any end user device, drawbacks include high bandwidth requirements and very low latency. Consequently, a challenge faced by cloud gaming service providers is the design of algorithms for adapting video streaming parameters to meet the end user system and network resource constraints. In this paper, we conduct an analysis of the commercial NVIDIA GeForce NOW game streaming platform adaptation mechanisms in light of variable network conditions. We further conduct an empirical user study involving the GeForce NOW platform to assess player Quality of Experience when such adaptation mechanisms are employed. The results provide insight into limitations of the currently deployed mechanisms, as well as aim to provide input for the proposal of designing future video encoding adaptation strategies.",
"title": ""
}
] | scidocsrr |